Feb 24 14:48:57 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 24 14:48:57 crc restorecon[4673]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 
14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 14:48:57 crc 
restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 
14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:57 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 14:48:58 crc restorecon[4673]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 24 14:48:58 crc kubenswrapper[4982]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 14:48:58 crc kubenswrapper[4982]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 24 14:48:58 crc kubenswrapper[4982]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 14:48:58 crc kubenswrapper[4982]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
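[Editor's note: the long run of restorecon "not reset as customized by admin" messages above is expected, not an error. container_file_t is a customizable SELinux type, so restorecon leaves the existing labels, including the per-pod MCS category pairs such as s0:c7,c13, in place unless it is forced. A minimal sketch of how to verify this on a RHEL-family host follows; the targeted-policy path is the usual default and is an assumption, not something read from this machine:

    # container_file_t should appear in the customizable-types list,
    # which is why restorecon skips these files unless -F is given
    cat /etc/selinux/targeted/contexts/customizable_types
    # Show the label the loaded policy *would* assign to one of the skipped paths
    matchpathcon /var/lib/kubelet/plugins/csi-hostpath
    # -F force-resets customizable types too; destructive here, since it would
    # strip the per-pod MCS categories the container runtime assigned
    restorecon -RFv /var/lib/kubelet

The kubelet flag deprecation notices that begin above continue below.]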
Feb 24 14:48:58 crc kubenswrapper[4982]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 24 14:48:58 crc kubenswrapper[4982]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.858954 4982 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869657 4982 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869716 4982 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869727 4982 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869737 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869747 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869758 4982 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869768 4982 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869777 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869785 4982 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869794 4982 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869802 4982 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869810 4982 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869818 4982 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869826 4982 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869833 4982 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869841 4982 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869850 4982 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869858 4982 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869866 4982 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869874 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869882 4982 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869891 4982 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869898 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869906 4982 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869914 4982 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869922 4982 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869930 4982 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869938 4982 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869946 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869953 4982 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869961 4982 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869969 4982 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.869991 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870000 4982 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870008 4982 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870018 4982 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870031 4982 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870042 4982 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870053 4982 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870063 4982 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870072 4982 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870082 4982 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870092 4982 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870106 4982 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870199 4982 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870209 4982 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870217 4982 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870226 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870235 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870243 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870251 4982 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870258 4982 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870266 4982 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870274 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870282 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870290 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870297 4982 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870305 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870313 4982 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870322 4982 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870331 4982 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870339 4982 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870346 4982 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870357 4982 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870367 4982 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870377 4982 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870385 4982 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870393 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870402 4982 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870410 4982 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.870418 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871449 4982 flags.go:64] FLAG: --address="0.0.0.0"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871530 4982 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871559 4982 flags.go:64] FLAG: --anonymous-auth="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871572 4982 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871584 4982 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871594 4982 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871607 4982 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871620 4982 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871630 4982 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871640 4982 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871650 4982 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871659 4982 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871669 4982 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871678 4982 flags.go:64] FLAG: --cgroup-root=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871687 4982 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871696 4982 flags.go:64] FLAG: --client-ca-file=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871705 4982 flags.go:64] FLAG: --cloud-config=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871715 4982 flags.go:64] FLAG: --cloud-provider=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871724 4982 flags.go:64] FLAG: --cluster-dns="[]"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871747 4982 flags.go:64] FLAG: --cluster-domain=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871756 4982 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871766 4982 flags.go:64] FLAG: --config-dir=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871775 4982 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871785 4982 flags.go:64] FLAG: --container-log-max-files="5"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871797 4982 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871806 4982 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871815 4982 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871825 4982 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871834 4982 flags.go:64] FLAG: --contention-profiling="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871844 4982 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871852 4982 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871862 4982 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871870 4982 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871889 4982 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871898 4982 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871907 4982 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871945 4982 flags.go:64] FLAG: --enable-load-reader="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871954 4982 flags.go:64] FLAG: --enable-server="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871964 4982 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871984 4982 flags.go:64] FLAG: --event-burst="100"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.871994 4982 flags.go:64] FLAG: --event-qps="50"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872003 4982 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872013 4982 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872022 4982 flags.go:64] FLAG: --eviction-hard=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872034 4982 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872042 4982 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872051 4982 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872060 4982 flags.go:64] FLAG: --eviction-soft=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872069 4982 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872079 4982 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872088 4982 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872097 4982 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872107 4982 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872116 4982 flags.go:64] FLAG: --fail-swap-on="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872127 4982 flags.go:64] FLAG: --feature-gates=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872146 4982 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872156 4982 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872166 4982 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872175 4982 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872185 4982 flags.go:64] FLAG: --healthz-port="10248"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872194 4982 flags.go:64] FLAG: --help="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872203 4982 flags.go:64] FLAG: --hostname-override=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872212 4982 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872221 4982 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872231 4982 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872243 4982 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872251 4982 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872260 4982 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872271 4982 flags.go:64] FLAG: --image-service-endpoint=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872280 4982 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872289 4982 flags.go:64] FLAG: --kube-api-burst="100"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872298 4982 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872321 4982 flags.go:64] FLAG: --kube-api-qps="50"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872331 4982 flags.go:64] FLAG: --kube-reserved=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872340 4982 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872349 4982 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872358 4982 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872367 4982 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872376 4982 flags.go:64] FLAG: --lock-file=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872385 4982 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872395 4982 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872404 4982 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872421 4982 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872430 4982 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872439 4982 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872448 4982 flags.go:64] FLAG: --logging-format="text"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872457 4982 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872466 4982 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872475 4982 flags.go:64] FLAG: --manifest-url=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872484 4982 flags.go:64] FLAG: --manifest-url-header=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872522 4982 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872532 4982 flags.go:64] FLAG: --max-open-files="1000000"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872548 4982 flags.go:64] FLAG: --max-pods="110"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872558 4982 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872566 4982 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872575 4982 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872585 4982 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872622 4982 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872631 4982 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872640 4982 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872669 4982 flags.go:64] FLAG: --node-status-max-images="50"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872679 4982 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872690 4982 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872701 4982 flags.go:64] FLAG: --pod-cidr=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872712 4982 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872727 4982 flags.go:64] FLAG: --pod-manifest-path=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872736 4982 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872745 4982 flags.go:64] FLAG: --pods-per-core="0"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872768 4982 flags.go:64] FLAG: --port="10250"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872777 4982 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872786 4982 flags.go:64] FLAG: --provider-id=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872795 4982 flags.go:64] FLAG: --qos-reserved=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872804 4982 flags.go:64] FLAG: --read-only-port="10255"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872813 4982 flags.go:64] FLAG: --register-node="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872822 4982 flags.go:64] FLAG: --register-schedulable="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872831 4982 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872848 4982 flags.go:64] FLAG: --registry-burst="10"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872857 4982 flags.go:64] FLAG: --registry-qps="5"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872865 4982 flags.go:64] FLAG: --reserved-cpus=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872874 4982 flags.go:64] FLAG: --reserved-memory=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872885 4982 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872895 4982 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872903 4982 flags.go:64] FLAG: --rotate-certificates="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872912 4982 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872921 4982 flags.go:64] FLAG: --runonce="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872930 4982 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872939 4982 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872948 4982 flags.go:64] FLAG: --seccomp-default="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872958 4982 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872966 4982 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872978 4982 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872988 4982 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.872997 4982 flags.go:64] FLAG: --storage-driver-password="root"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873006 4982 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873015 4982 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873024 4982 flags.go:64] FLAG: --storage-driver-user="root"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873032 4982 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873042 4982 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873051 4982 flags.go:64] FLAG: --system-cgroups=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873060 4982 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873076 4982 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873085 4982 flags.go:64] FLAG: --tls-cert-file=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873095 4982 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873111 4982 flags.go:64] FLAG: --tls-min-version=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873132 4982 flags.go:64] FLAG: --tls-private-key-file=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873141 4982 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873150 4982 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873159 4982 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873168 4982 flags.go:64] FLAG: --v="2"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873181 4982 flags.go:64] FLAG: --version="false"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873192 4982 flags.go:64] FLAG: --vmodule=""
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873203 4982 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.873212 4982 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873602 4982 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873617 4982 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873627 4982 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873639 4982 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873648 4982 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873658 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873666 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873675 4982 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873684 4982 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873699 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873707 4982 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873715 4982 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873724 4982 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873732 4982 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873739 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
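Annotation: the two deprecation notices at the top of this boot sequence point at the file named by --config, which the flag dump above shows is /etc/kubernetes/kubelet.conf. A minimal sketch of the equivalent config-file stanza, built only from values visible in the dump (field names follow the upstream KubeletConfiguration schema; whether OpenShift's machine-config operator already manages this file on a CRC node is an assumption to verify before editing it by hand):

    # Sketch only: config-file equivalent of
    # --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    systemReserved:
      cpu: 200m
      ephemeral-storage: 350Mi
      memory: 350Mi
    # --pod-infra-container-image has no equivalent here; per the log, the
    # sandbox (pause) image is now reported by the CRI runtime, so it would
    # be configured on the CRI-O side (its pause_image setting) instead.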
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873750 4982 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873761 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873770 4982 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873778 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873788 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873796 4982 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873804 4982 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873813 4982 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873821 4982 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873832 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873841 4982 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873851 4982 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873874 4982 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873882 4982 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873891 4982 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873905 4982 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873915 4982 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873924 4982 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873932 4982 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873941 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873949 4982 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873957 4982 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873965 4982 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873973 4982 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873981 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.873989 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874003 4982 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874012 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874021 4982 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874031 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874039 4982 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874048 4982 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874057 4982 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874065 4982 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874073 4982 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874081 4982 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874090 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874099 4982 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874107 4982 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874142 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874151 4982 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874159 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874180 4982 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874188 4982 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874199 4982 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874209 4982 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874219 4982 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874231 4982 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874254 4982 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874263 4982 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874271 4982 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874279 4982 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874288 4982 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874296 4982 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874305 4982 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.874313 4982 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.874326 4982 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.895560 4982 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.895637 4982 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896414 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896442 4982 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896451 4982 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896460 4982 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896468 4982 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896477 4982 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896530 4982 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896539 4982 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896547 4982 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896555 4982 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896563 4982 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896574 4982 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896587 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896599 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896610 4982 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896619 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896630 4982 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896640 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896651 4982 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896660 4982 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896668 4982 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896675 4982 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896683 4982 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896691 4982 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896700 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896710 4982 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896721 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896729 4982 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896740 4982 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896751 4982 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896760 4982 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896769 4982 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896777 4982 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896785 4982 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896793 4982 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896801 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896809 4982 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896817 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896825 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896833 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896841 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896848 4982 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896856 4982 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896863 4982 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896871 4982 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896879 4982 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896891 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896899 4982 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896907 4982 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896915 4982 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896923 4982 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896931 4982 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896939 4982 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896947 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.896957 4982 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897137 4982 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897144 4982 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897152 4982 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897160 4982 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897167 4982 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897175 4982 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897183 4982 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897192 4982 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897199 4982 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897207 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897216 4982 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897223 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897231 4982 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897238 4982 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897246 4982 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897253 4982 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.897268 4982 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897541 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897555 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897563 4982 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897573 4982 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
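Annotation: the long runs of "unrecognized feature gate" warnings repeat several times, apparently because the gate map is parsed at more than one point during startup; the names (PinnedImages, GatewayAPI, NewOLM, and so on) look like OpenShift platform gates carried in the cluster's feature set that the embedded Kubernetes gate registry does not know, so they are warned about and skipped rather than failing the kubelet. The recognized subset is printed verbatim in the "feature gates: {map[...]}" lines; expressed in the config file, that map would look roughly like the sketch below (gate availability varies by Kubernetes version, so treat field values as illustrative, taken from the logged map):

    # Sketch: the effective gates from the logged map, as a featureGates
    # stanza in KubeletConfiguration.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    featureGates:
      CloudDualStackNodeIPs: true                  # GA; explicit setting draws the feature_gate.go:353 warning
      DisableKubeletCloudCredentialProviders: true # GA; same warning
      ValidatingAdmissionPolicy: true              # GA; same warning
      KMSv1: true                                  # deprecated; draws the feature_gate.go:351 warning
      DynamicResourceAllocation: false
      EventedPLEG: false
      NodeSwap: false
      UserNamespacesSupport: false
      # ...remaining false entries from the logged map omitted here for brevity.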
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897584 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897593 4982 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897603 4982 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897611 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897619 4982 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897627 4982 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897635 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897643 4982 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897651 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897658 4982 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897666 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897674 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897682 4982 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897689 4982 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897746 4982 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897757 4982 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897766 4982 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897775 4982 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897783 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897792 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897800 4982 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897811 4982 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897821 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897829 4982 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897838 4982 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897848 4982 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897858 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897867 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897875 4982 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897883 4982 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897892 4982 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897900 4982 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897908 4982 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897916 4982 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897924 4982 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897932 4982 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897943 4982 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897952 4982 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897961 4982 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897969 4982 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897977 4982 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897986 4982 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.897995 4982 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898003 4982 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898012 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898020 4982 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898029 4982 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898037 4982 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898045 4982 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898052 4982 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898061 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898069 4982 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898078 4982 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898085 4982 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898093 4982 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898101 4982 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898108 4982 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898116 4982 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898125 4982 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898132 4982 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898140 4982 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898147 4982 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898157 4982 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898165 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898172 4982 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898180 4982 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 14:48:58 crc kubenswrapper[4982]: W0224 14:48:58.898190 4982 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.898202 4982 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.898487 4982 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 24 14:48:58 crc kubenswrapper[4982]: E0224 14:48:58.905035 4982 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.910144 4982 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.910317 4982 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
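Annotation: the flag dump showed --rotate-certificates="false", yet the kubelet reports "Client rotation is on" here; presumably rotation is enabled in the config file, whose values override flag defaults (an inference, not confirmed by the log). The bootstrap error is the kubelet noticing that the client certificate embedded in /var/lib/kubelet/kubeconfig expired earlier the same day (05:52 UTC; the machine timestamp below puts this boot at 14:48 on 2026-02-24), so it falls back to the bootstrap credentials and the certificate directory, exactly as the next two lines describe. A sketch of the config-file setting that would produce this behavior:

    # Sketch: config-file setting consistent with "Client rotation is on"
    # despite --rotate-certificates="false" in the flag dump.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    rotateCertificates: true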
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.912554 4982 server.go:997] "Starting client certificate rotation"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.913191 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.913406 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.939696 4982 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.942898 4982 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 14:48:58 crc kubenswrapper[4982]: E0224 14:48:58.943070 4982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Feb 24 14:48:58 crc kubenswrapper[4982]: I0224 14:48:58.964193 4982 log.go:25] "Validated CRI v1 runtime API"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.006860 4982 log.go:25] "Validated CRI v1 image API"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.009180 4982 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.015191 4982 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-24-14-44-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.015240 4982 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.044317 4982 manager.go:217] Machine: {Timestamp:2026-02-24 14:48:59.040746855 +0000 UTC m=+0.659805408 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1f50fe44-226c-4567-9cfd-6e69cfb222c6 BootID:f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:47:36:fe Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:47:36:fe Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:30:70:eb Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c2:ce:96 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5a:cc:58 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:89:01:5b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:fd:f5:c3:5a:cd Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:22:6c:9d:ab:0f:63 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.044804 4982 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.045050 4982 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.046807 4982 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.047127 4982 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.047191 4982 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.047607 4982 topology_manager.go:138] "Creating topology manager with none policy"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.047628 4982 container_manager_linux.go:303] "Creating device plugin manager"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.048397 4982 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.048452 4982 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.048839 4982 state_mem.go:36] "Initialized new in-memory state store"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.048998 4982 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.052761 4982 kubelet.go:418] "Attempting to sync node with API server"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.052799 4982 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.052837 4982 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.052862 4982 kubelet.go:324] "Adding apiserver pod source"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.052881 4982 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.057661 4982 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.058815 4982 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 24 14:48:59 crc kubenswrapper[4982]: W0224 14:48:59.058841 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.058989 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Feb 24 14:48:59 crc kubenswrapper[4982]: W0224 14:48:59.059491 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.059628 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.060408 4982 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062093 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062139 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062156 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062170 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062191 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062205 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062219 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062241 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062256 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062271 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062289 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.062309 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.063438 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.064535 4982 server.go:1280] "Started kubelet"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.066412 4982 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.066797 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.066387 4982 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 24 14:48:59 crc systemd[1]: Started Kubernetes Kubelet.
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.071134 4982 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.072460 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.072750 4982 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.073679 4982 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.074203 4982 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.074242 4982 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.075354 4982 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.077744 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.082570 4982 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.082610 4982 factory.go:55] Registering systemd factory
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.082628 4982 factory.go:221] Registration of the systemd container factory successfully
Feb 24 14:48:59 crc kubenswrapper[4982]: W0224 14:48:59.083198 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.083358 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.083049 4982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897362b8ca17f0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.064467213 +0000 UTC m=+0.683525736,LastTimestamp:2026-02-24 14:48:59.064467213 +0000 UTC m=+0.683525736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.085589 4982 factory.go:153] Registering CRI-O factory
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.085648 4982 factory.go:221] Registration of the crio container factory successfully
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.085698 4982 factory.go:103] Registering Raw factory
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.085732 4982 manager.go:1196] Started watching for new ooms in manager
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.086027 4982 server.go:460] "Adding debug handlers to kubelet server"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.086989 4982 manager.go:319] Starting recovery of all containers
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093327 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093381 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093406 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093421 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093435 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093455 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093468 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093484 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093520 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093535 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093549 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093564 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093578 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093603 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093617 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093632 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093645 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093659 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093673 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093699 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093712 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093727 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093741 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093755 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093769 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093784 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093801 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093817 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093832 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093846 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093859 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093873 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093891 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093904 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093918 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093934 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093947 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093960 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093974 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.093988 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094002 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094016 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094031 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094045 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094095 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094110 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094124 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094138 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094153 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094167 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094183 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094198 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094218 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094233 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094303 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094321 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094339 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094354 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094393 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094406 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094419 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094433 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094447 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094461 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094476 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094513 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094529 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094542 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094555 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094569 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094582 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094601 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094617 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094630 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094643 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094657 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094670 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094684 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094696 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094710 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094723 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094782 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094804 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094816 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094829 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094844 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094859 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094872 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094888 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094902 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094916 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094929 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094948 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094962 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.094997 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095011 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095024 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095038 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095054 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095068 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095081 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095094 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095106 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095119 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095138 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095151 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095167 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095192 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095206 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095220 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095240 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095254 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095276 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095293 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095312 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095324 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095336 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095359 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095371 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095383 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095405 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095418 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095430 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095443 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095455 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095469 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095512 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095526 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095538 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095551 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095564 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095575 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095587 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095600 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095612 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095623 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095638 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095651 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095662 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095674 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095686 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69"
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095699 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095718 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095733 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095745 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095758 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095774 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095786 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.095808 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.097909 4982 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.097953 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.097977 4982 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098004 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098025 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098045 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098065 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098086 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098111 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098131 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098150 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098182 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098202 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098226 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098245 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098265 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098283 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098306 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098328 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098349 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098367 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098399 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098440 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098459 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098478 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098525 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098546 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098569 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098587 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098609 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098631 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098650 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098669 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098688 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098707 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098728 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098747 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098766 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098786 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098805 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098824 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098843 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098865 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098907 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098938 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098957 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.098987 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099011 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099031 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099048 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099070 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099090 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099122 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099141 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099160 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099179 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099201 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099220 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099239 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099258 4982 reconstruct.go:97] "Volume reconstruction finished" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.099272 4982 reconciler.go:26] "Reconciler: start to sync state" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.127744 4982 manager.go:324] Recovery completed Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.141774 4982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.144111 4982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.144199 4982 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.144246 4982 kubelet.go:2335] "Starting kubelet main sync loop" Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.144320 4982 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 24 14:48:59 crc kubenswrapper[4982]: W0224 14:48:59.145601 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.145677 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.146048 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.148154 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.148217 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.148236 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.149614 4982 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.149644 4982 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.149676 4982 state_mem.go:36] "Initialized new in-memory state store" Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.174459 4982 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 14:48:59 crc kubenswrapper[4982]: 
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.176414 4982 policy_none.go:49] "None policy: Start"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.177519 4982 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.177566 4982 state_mem.go:35] "Initializing new in-memory state store"
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.244479 4982 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.244853 4982 manager.go:334] "Starting Device Plugin manager"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.244961 4982 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.245558 4982 server.go:79] "Starting device plugin registration server"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.246241 4982 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.246273 4982 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.246467 4982 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.246630 4982 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.246661 4982 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.262122 4982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.279173 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.347385 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.348822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.348887 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.348907 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.348950 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.349801 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc"
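Every failure so far (reflector list, lease creation, node registration) is the same dial tcp 38.102.83.50:6443: connect: connection refused, i.e. nothing is listening yet on api-int.crc.testing:6443; note the lease controller's retry interval doubling from 400ms here to 800ms further down. A standalone reachability probe in the same spirit (endpoint taken from the log; the backoff constants are illustrative, not the kubelet's):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const endpoint = "api-int.crc.testing:6443" // from the failing URLs above
        backoff := 400 * time.Millisecond           // doubles, like the lease retries
        for i := 0; i < 5; i++ {
            conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver endpoint reachable")
                return
            }
            fmt.Printf("dial failed (%v), retrying in %v\n", err, backoff)
            time.Sleep(backoff)
            backoff *= 2
        }
    }
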
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.445549 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.447999 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.448059 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.448079 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.448280 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.448880 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.448983 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.449630 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.449679 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.449696 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.449860 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.450165 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.450248 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.450924 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.450990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.451014 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.451812 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.451860 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.451880 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.452022 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.452102 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.452131 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.452149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.452198 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.452248 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.453145 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.453182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.453198 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.453357 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.453486 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.453567 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.453568 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.453705 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.453724 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.456331 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.456528 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.456565 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.457319 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.457388 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.457421 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.458076 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.458361 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.462269 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.462318 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.462339 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.505885 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.505945 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.505983 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506015 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506048 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506079 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506110 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506138 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506211 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506369 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506438 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506469 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506618 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506686 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.506748 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.550202 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.552060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.552118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc 
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.552208 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.552297 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.553086 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.608449 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.608590 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.608760 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.608692 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.608821 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.608767 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.608873 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.608929 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.608974 4982
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609039 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609065 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609100 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609150 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609166 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609207 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609239 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609276 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609268 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609311 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609283 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609395 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609433 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609446 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609471 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609536 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609540 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609580 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609603 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609625 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.609604 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.680595 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.799677 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.827262 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.842168 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.853627 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: W0224 14:48:59.857022 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1386024eb53657813faf0324e732f680b162a3c8ffa98336a1e66c5c251f89a8 WatchSource:0}: Error finding container 1386024eb53657813faf0324e732f680b162a3c8ffa98336a1e66c5c251f89a8: Status 404 returned error can't find the container with id 1386024eb53657813faf0324e732f680b162a3c8ffa98336a1e66c5c251f89a8 Feb 24 14:48:59 crc kubenswrapper[4982]: W0224 14:48:59.869195 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9eb39dc95db460706e5cfc1b2393e0ea5bdc589af3e26ae22027d82a4cf8cc5e WatchSource:0}: Error finding container 9eb39dc95db460706e5cfc1b2393e0ea5bdc589af3e26ae22027d82a4cf8cc5e: Status 404 returned error can't find the container with id 9eb39dc95db460706e5cfc1b2393e0ea5bdc589af3e26ae22027d82a4cf8cc5e Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.873275 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 14:48:59 crc kubenswrapper[4982]: W0224 14:48:59.886832 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-52b607c6960cdb6ef9fd9fc33d7dc02582f536949af895b339a3fe2a7640f3c0 WatchSource:0}: Error finding container 52b607c6960cdb6ef9fd9fc33d7dc02582f536949af895b339a3fe2a7640f3c0: Status 404 returned error can't find the container with id 52b607c6960cdb6ef9fd9fc33d7dc02582f536949af895b339a3fe2a7640f3c0 Feb 24 14:48:59 crc kubenswrapper[4982]: W0224 14:48:59.889694 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8c3cf27aef7b5bbf31e74787e0e6e12461d44c91765632d3dd65ef3fde91f141 WatchSource:0}: Error finding container 8c3cf27aef7b5bbf31e74787e0e6e12461d44c91765632d3dd65ef3fde91f141: Status 404 returned error can't find the container with id 8c3cf27aef7b5bbf31e74787e0e6e12461d44c91765632d3dd65ef3fde91f141 Feb 24 14:48:59 crc kubenswrapper[4982]: W0224 14:48:59.907670 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-738548ad173a4c04f97530f07ecdfefaeccefb9c4473afb6804bc064950c4970 WatchSource:0}: Error finding container 738548ad173a4c04f97530f07ecdfefaeccefb9c4473afb6804bc064950c4970: Status 404 returned error can't find the container with id 738548ad173a4c04f97530f07ecdfefaeccefb9c4473afb6804bc064950c4970 Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.953698 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.956052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.956122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.956146 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:48:59 crc kubenswrapper[4982]: I0224 14:48:59.956195 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:48:59 crc kubenswrapper[4982]: E0224 14:48:59.957045 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Feb 24 14:49:00 crc kubenswrapper[4982]: W0224 14:49:00.066639 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:00 crc kubenswrapper[4982]: E0224 14:49:00.066799 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 
14:49:00.068462 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.150244 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"738548ad173a4c04f97530f07ecdfefaeccefb9c4473afb6804bc064950c4970"} Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.152177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8c3cf27aef7b5bbf31e74787e0e6e12461d44c91765632d3dd65ef3fde91f141"} Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.154004 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52b607c6960cdb6ef9fd9fc33d7dc02582f536949af895b339a3fe2a7640f3c0"} Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.155821 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9eb39dc95db460706e5cfc1b2393e0ea5bdc589af3e26ae22027d82a4cf8cc5e"} Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.157460 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1386024eb53657813faf0324e732f680b162a3c8ffa98336a1e66c5c251f89a8"} Feb 24 14:49:00 crc kubenswrapper[4982]: W0224 14:49:00.277571 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:00 crc kubenswrapper[4982]: E0224 14:49:00.277714 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:49:00 crc kubenswrapper[4982]: W0224 14:49:00.305043 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:00 crc kubenswrapper[4982]: E0224 14:49:00.305158 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:49:00 crc kubenswrapper[4982]: W0224 14:49:00.329261 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:00 crc kubenswrapper[4982]: E0224 14:49:00.329412 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:49:00 crc kubenswrapper[4982]: E0224 14:49:00.482082 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s" Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.757735 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.760381 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.760446 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.760472 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:00 crc kubenswrapper[4982]: I0224 14:49:00.760541 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:49:00 crc kubenswrapper[4982]: E0224 14:49:00.761344 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.035297 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 14:49:01 crc kubenswrapper[4982]: E0224 14:49:01.037581 4982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.068756 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.163148 4982 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784" exitCode=0 Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.163277 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784"} Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 
14:49:01.163345 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.164889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.164933 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.164945 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.166631 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248"} Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.166703 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6"} Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.169010 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10" exitCode=0 Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.169092 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10"} Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.169247 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.170818 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.170884 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.170909 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.172582 4982 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f" exitCode=0 Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.172710 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.172656 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f"} Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.175689 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.175717 4982 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.175729 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.176721 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.177950 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.177992 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.178007 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.178409 4982 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083" exitCode=0 Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.178486 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083"} Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.178634 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.181972 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.182015 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:01 crc kubenswrapper[4982]: I0224 14:49:01.182031 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:01 crc kubenswrapper[4982]: W0224 14:49:01.966641 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:01 crc kubenswrapper[4982]: E0224 14:49:01.966789 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.067652 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:02 crc kubenswrapper[4982]: E0224 14:49:02.083591 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s" Feb 24 
14:49:02 crc kubenswrapper[4982]: W0224 14:49:02.169809 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:02 crc kubenswrapper[4982]: E0224 14:49:02.169918 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.183016 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f"} Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.183070 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.184029 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.184074 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.184083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.187379 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0821facce4cb4bd700f535ccd91521a033cc2581ef0559a856bd016d0678e7ec"} Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.187425 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2a4dce29a524baf6b13d3fb3167c75984a0ee416365faae953bf869a657a9061"} Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.187463 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"32ebbde60bef341e982e1e7fb5c237c5d5eedfdb72c4aa81627ecd582fc60ec5"} Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.187568 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.188749 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.188779 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.188790 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.191083 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf"} Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.191122 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182"} Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.191185 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.192311 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.192433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.192537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.193770 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90"} Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.193915 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256"} Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.195243 4982 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6" exitCode=0 Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.195284 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6"} Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.195407 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.196340 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.196379 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.196391 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:02 crc kubenswrapper[4982]: W0224 14:49:02.208139 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:02 crc kubenswrapper[4982]: E0224 14:49:02.208324 4982 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:49:02 crc kubenswrapper[4982]: W0224 14:49:02.327779 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Feb 24 14:49:02 crc kubenswrapper[4982]: E0224 14:49:02.327871 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.362555 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.364329 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.364362 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.364373 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:02 crc kubenswrapper[4982]: I0224 14:49:02.364402 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:49:02 crc kubenswrapper[4982]: E0224 14:49:02.364957 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.203382 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e"} Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.203446 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.203448 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75568c5e11dac427d7cd93bffd4634a6ffa93c4a9af58121df063c3cb9be3f19"} Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.203570 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4"} Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.205171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.205362 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.205594 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.207443 4982 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138" exitCode=0 Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.207572 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.207625 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.207678 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.207677 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.207799 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.208021 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138"} Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.208393 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.208426 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.208438 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.209322 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.209378 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.209398 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.209390 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.209573 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.209593 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.212337 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.212364 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.212372 4982 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.368114 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:03 crc kubenswrapper[4982]: I0224 14:49:03.376883 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.024830 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.216408 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435"} Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.216472 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.216545 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.216598 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.216545 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19"} Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.216682 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.216743 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a"} Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.217758 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.217800 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.217813 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.218121 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.218164 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.218182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.946805 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.947128 4982 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.949001 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.949063 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:04 crc kubenswrapper[4982]: I0224 14:49:04.949088 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.202364 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.225597 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.225639 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.225652 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.225427 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99"} Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227006 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700"} Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227636 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227700 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227700 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227772 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227871 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227961 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.227989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.375969 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 
24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.378904 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.565141 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.567258 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.567366 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.567419 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:05 crc kubenswrapper[4982]: I0224 14:49:05.567613 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:49:06 crc kubenswrapper[4982]: I0224 14:49:06.228578 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:06 crc kubenswrapper[4982]: I0224 14:49:06.229597 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:06 crc kubenswrapper[4982]: I0224 14:49:06.230435 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:06 crc kubenswrapper[4982]: I0224 14:49:06.230532 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:06 crc kubenswrapper[4982]: I0224 14:49:06.230561 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:06 crc kubenswrapper[4982]: I0224 14:49:06.231028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:06 crc kubenswrapper[4982]: I0224 14:49:06.231082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:06 crc kubenswrapper[4982]: I0224 14:49:06.231106 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:07 crc kubenswrapper[4982]: I0224 14:49:07.232235 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:07 crc kubenswrapper[4982]: I0224 14:49:07.233767 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:07 crc kubenswrapper[4982]: I0224 14:49:07.233838 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:07 crc kubenswrapper[4982]: I0224 14:49:07.233851 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:07 crc kubenswrapper[4982]: I0224 14:49:07.958457 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 24 14:49:07 crc kubenswrapper[4982]: I0224 14:49:07.958772 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:07 crc kubenswrapper[4982]: I0224 14:49:07.960433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:07 
crc kubenswrapper[4982]: I0224 14:49:07.960495 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:07 crc kubenswrapper[4982]: I0224 14:49:07.960547 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:08 crc kubenswrapper[4982]: I0224 14:49:08.736698 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:08 crc kubenswrapper[4982]: I0224 14:49:08.736919 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:08 crc kubenswrapper[4982]: I0224 14:49:08.741433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:08 crc kubenswrapper[4982]: I0224 14:49:08.741677 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:08 crc kubenswrapper[4982]: I0224 14:49:08.741830 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:09 crc kubenswrapper[4982]: E0224 14:49:09.262333 4982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.616151 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.616447 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.618111 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.618178 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.618198 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.721247 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.721384 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.723144 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.723225 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.723247 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:11 crc kubenswrapper[4982]: I0224 14:49:11.737184 4982 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 14:49:11 crc 
kubenswrapper[4982]: I0224 14:49:11.737318 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 14:49:12 crc kubenswrapper[4982]: I0224 14:49:12.113568 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:12 crc kubenswrapper[4982]: I0224 14:49:12.247118 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:12 crc kubenswrapper[4982]: I0224 14:49:12.248758 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:12 crc kubenswrapper[4982]: I0224 14:49:12.248823 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:12 crc kubenswrapper[4982]: I0224 14:49:12.248841 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:13 crc kubenswrapper[4982]: I0224 14:49:13.069059 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.025615 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.025731 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.101631 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z Feb 24 14:49:14 crc kubenswrapper[4982]: E0224 14:49:14.102586 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 24 14:49:14 crc kubenswrapper[4982]: E0224 14:49:14.102952 4982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 14:49:14 crc kubenswrapper[4982]: W0224 14:49:14.104783 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z Feb 24 14:49:14 crc kubenswrapper[4982]: E0224 14:49:14.104882 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 14:49:14 crc kubenswrapper[4982]: W0224 14:49:14.107305 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z Feb 24 14:49:14 crc kubenswrapper[4982]: E0224 14:49:14.107414 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 14:49:14 crc kubenswrapper[4982]: W0224 14:49:14.108348 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z Feb 24 14:49:14 crc kubenswrapper[4982]: E0224 14:49:14.108453 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 14:49:14 crc kubenswrapper[4982]: E0224 14:49:14.109685 4982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897362b8ca17f0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.064467213 +0000 UTC m=+0.683525736,LastTimestamp:2026-02-24 14:48:59.064467213 +0000 UTC m=+0.683525736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:14 crc kubenswrapper[4982]: E0224 14:49:14.112564 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 14:49:14 crc kubenswrapper[4982]: W0224 14:49:14.113352 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z Feb 24 14:49:14 crc kubenswrapper[4982]: E0224 14:49:14.113407 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.117991 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.118073 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.254198 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.255937 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75568c5e11dac427d7cd93bffd4634a6ffa93c4a9af58121df063c3cb9be3f19" exitCode=255 Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.255984 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"75568c5e11dac427d7cd93bffd4634a6ffa93c4a9af58121df063c3cb9be3f19"} Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.256192 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:14 crc kubenswrapper[4982]: 
I0224 14:49:14.257229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.257265 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.257278 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:14 crc kubenswrapper[4982]: I0224 14:49:14.257967 4982 scope.go:117] "RemoveContainer" containerID="75568c5e11dac427d7cd93bffd4634a6ffa93c4a9af58121df063c3cb9be3f19" Feb 24 14:49:15 crc kubenswrapper[4982]: I0224 14:49:15.073172 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:15Z is after 2026-02-23T05:33:13Z Feb 24 14:49:15 crc kubenswrapper[4982]: I0224 14:49:15.261636 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 14:49:15 crc kubenswrapper[4982]: I0224 14:49:15.263839 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095"} Feb 24 14:49:15 crc kubenswrapper[4982]: I0224 14:49:15.264015 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:15 crc kubenswrapper[4982]: I0224 14:49:15.265248 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:15 crc kubenswrapper[4982]: I0224 14:49:15.265305 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:15 crc kubenswrapper[4982]: I0224 14:49:15.265324 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.072630 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:16Z is after 2026-02-23T05:33:13Z Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.276102 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.276904 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.280039 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095" exitCode=255 Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.280134 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095"} Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.280246 4982 scope.go:117] "RemoveContainer" containerID="75568c5e11dac427d7cd93bffd4634a6ffa93c4a9af58121df063c3cb9be3f19" Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.280412 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.282635 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.282704 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.282727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:16 crc kubenswrapper[4982]: I0224 14:49:16.283641 4982 scope.go:117] "RemoveContainer" containerID="bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095" Feb 24 14:49:16 crc kubenswrapper[4982]: E0224 14:49:16.284006 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 14:49:17 crc kubenswrapper[4982]: I0224 14:49:17.073119 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:17Z is after 2026-02-23T05:33:13Z Feb 24 14:49:17 crc kubenswrapper[4982]: I0224 14:49:17.285412 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 14:49:18 crc kubenswrapper[4982]: I0224 14:49:18.072251 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:18Z is after 2026-02-23T05:33:13Z Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.034618 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.034887 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.036885 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.036955 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.036978 4982 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.038052 4982 scope.go:117] "RemoveContainer" containerID="bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095" Feb 24 14:49:19 crc kubenswrapper[4982]: E0224 14:49:19.038359 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.048579 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.073468 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:49:19Z is after 2026-02-23T05:33:13Z Feb 24 14:49:19 crc kubenswrapper[4982]: E0224 14:49:19.262699 4982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.294066 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.296004 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.296079 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.296106 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:19 crc kubenswrapper[4982]: I0224 14:49:19.297286 4982 scope.go:117] "RemoveContainer" containerID="bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095" Feb 24 14:49:19 crc kubenswrapper[4982]: E0224 14:49:19.297634 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 14:49:20 crc kubenswrapper[4982]: I0224 14:49:20.076720 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:20 crc kubenswrapper[4982]: E0224 14:49:20.508619 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 14:49:20 crc kubenswrapper[4982]: I0224 14:49:20.512951 4982 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Feb 24 14:49:20 crc kubenswrapper[4982]: I0224 14:49:20.514651 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:20 crc kubenswrapper[4982]: I0224 14:49:20.514718 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:20 crc kubenswrapper[4982]: I0224 14:49:20.514738 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:20 crc kubenswrapper[4982]: I0224 14:49:20.514779 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:49:20 crc kubenswrapper[4982]: E0224 14:49:20.521619 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.074721 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:21 crc kubenswrapper[4982]: W0224 14:49:21.461537 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 24 14:49:21 crc kubenswrapper[4982]: E0224 14:49:21.461623 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.661238 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.661557 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.664146 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.664229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.664252 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.685291 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.719900 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.720131 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.721816 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.721892 4982 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.721918 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.722963 4982 scope.go:117] "RemoveContainer" containerID="bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095" Feb 24 14:49:21 crc kubenswrapper[4982]: E0224 14:49:21.723274 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.737602 4982 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 14:49:21 crc kubenswrapper[4982]: I0224 14:49:21.737688 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 14:49:22 crc kubenswrapper[4982]: I0224 14:49:22.073292 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:22 crc kubenswrapper[4982]: I0224 14:49:22.220753 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 14:49:22 crc kubenswrapper[4982]: I0224 14:49:22.240076 4982 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 14:49:22 crc kubenswrapper[4982]: I0224 14:49:22.303586 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:22 crc kubenswrapper[4982]: I0224 14:49:22.304957 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:22 crc kubenswrapper[4982]: I0224 14:49:22.305010 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:22 crc kubenswrapper[4982]: I0224 14:49:22.305029 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:23 crc kubenswrapper[4982]: I0224 14:49:23.072571 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:24 crc kubenswrapper[4982]: I0224 14:49:24.074107 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.118963 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b8ca17f0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.064467213 +0000 UTC m=+0.683525736,LastTimestamp:2026-02-24 14:48:59.064467213 +0000 UTC m=+0.683525736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.127782 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f0b6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148192618 +0000 UTC m=+0.767251151,LastTimestamp:2026-02-24 14:48:59.148192618 +0000 UTC m=+0.767251151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.129766 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f997e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.14822899 +0000 UTC m=+0.767287523,LastTimestamp:2026-02-24 14:48:59.14822899 +0000 UTC m=+0.767287523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.137765 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919fe70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148248842 +0000 UTC m=+0.767307365,LastTimestamp:2026-02-24 14:48:59.148248842 +0000 UTC 
m=+0.767307365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.145017 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b9871dcf9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.262672121 +0000 UTC m=+0.881730654,LastTimestamp:2026-02-24 14:48:59.262672121 +0000 UTC m=+0.881730654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.152777 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f0b6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f0b6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148192618 +0000 UTC m=+0.767251151,LastTimestamp:2026-02-24 14:48:59.348865336 +0000 UTC m=+0.967923859,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.160537 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f997e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f997e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.14822899 +0000 UTC m=+0.767287523,LastTimestamp:2026-02-24 14:48:59.348900268 +0000 UTC m=+0.967958801,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.167650 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919fe70a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919fe70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.174934 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f0b6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f0b6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148192618 +0000 UTC m=+0.767251151,LastTimestamp:2026-02-24 14:48:59.44803924 +0000 UTC m=+1.067097773,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.182223 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f997e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f997e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.14822899 +0000 UTC m=+0.767287523,LastTimestamp:2026-02-24 14:48:59.448072431 +0000 UTC m=+1.067130964,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.189062 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919fe70a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919fe70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148248842 +0000 UTC m=+0.767307365,LastTimestamp:2026-02-24 14:48:59.448099793 +0000 UTC m=+1.067158316,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.197677 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f0b6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f0b6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148192618 +0000 UTC m=+0.767251151,LastTimestamp:2026-02-24 14:48:59.449661802 +0000 UTC m=+1.068720335,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.204837 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f997e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f997e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.14822899 +0000 UTC m=+0.767287523,LastTimestamp:2026-02-24 14:48:59.449690493 +0000 UTC m=+1.068749026,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.212051 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919fe70a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919fe70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148248842 +0000 UTC m=+0.767307365,LastTimestamp:2026-02-24 14:48:59.449706444 +0000 UTC m=+1.068764977,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.220025 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f0b6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f0b6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148192618 +0000 UTC m=+0.767251151,LastTimestamp:2026-02-24 14:48:59.450967166 +0000 UTC m=+1.070025689,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.228652 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f997e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f997e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.14822899 +0000 UTC m=+0.767287523,LastTimestamp:2026-02-24 14:48:59.451002838 +0000 UTC m=+1.070061371,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.236078 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919fe70a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919fe70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148248842 +0000 UTC m=+0.767307365,LastTimestamp:2026-02-24 14:48:59.451025159 +0000 UTC m=+1.070083682,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.243309 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f0b6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f0b6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148192618 +0000 UTC m=+0.767251151,LastTimestamp:2026-02-24 14:48:59.451841816 +0000 UTC m=+1.070900349,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.251260 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f997e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f997e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.14822899 +0000 UTC m=+0.767287523,LastTimestamp:2026-02-24 14:48:59.451871308 +0000 UTC m=+1.070929841,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.259321 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919fe70a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919fe70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148248842 +0000 UTC m=+0.767307365,LastTimestamp:2026-02-24 14:48:59.451890719 +0000 UTC m=+1.070949252,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
\"default\"" event="&Event{ObjectMeta:{crc.1897362b919fe70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148248842 +0000 UTC m=+0.767307365,LastTimestamp:2026-02-24 14:48:59.451890719 +0000 UTC m=+1.070949252,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.266425 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f0b6a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f0b6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148192618 +0000 UTC m=+0.767251151,LastTimestamp:2026-02-24 14:48:59.452122692 +0000 UTC m=+1.071181225,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.273328 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f997e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f997e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.14822899 +0000 UTC m=+0.767287523,LastTimestamp:2026-02-24 14:48:59.452143403 +0000 UTC m=+1.071201936,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.280598 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919fe70a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919fe70a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148248842 +0000 UTC m=+0.767307365,LastTimestamp:2026-02-24 14:48:59.452159414 +0000 UTC m=+1.071217937,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.288717 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f0b6a\" 
is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f0b6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.148192618 +0000 UTC m=+0.767251151,LastTimestamp:2026-02-24 14:48:59.453174012 +0000 UTC m=+1.072232535,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.296376 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897362b919f997e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897362b919f997e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.14822899 +0000 UTC m=+0.767287523,LastTimestamp:2026-02-24 14:48:59.453192403 +0000 UTC m=+1.072250936,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.306114 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897362bbc8781b7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.868070327 +0000 UTC m=+1.487128850,LastTimestamp:2026-02-24 14:48:59.868070327 +0000 UTC m=+1.487128850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.312828 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362bbd038e17 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.876199959 +0000 UTC m=+1.495258492,LastTimestamp:2026-02-24 14:48:59.876199959 +0000 UTC m=+1.495258492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.320713 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362bbe17bf65 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.894300517 +0000 UTC m=+1.513359050,LastTimestamp:2026-02-24 14:48:59.894300517 +0000 UTC m=+1.513359050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.325697 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362bbe17cb1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.894303517 +0000 UTC m=+1.513362050,LastTimestamp:2026-02-24 14:48:59.894303517 +0000 UTC m=+1.513362050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.334613 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362bbf7fd44b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:48:59.917898827 +0000 UTC m=+1.536957350,LastTimestamp:2026-02-24 14:48:59.917898827 +0000 UTC m=+1.536957350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.337338 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362bea0cc9e8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.631779816 +0000 UTC m=+2.250838349,LastTimestamp:2026-02-24 14:49:00.631779816 +0000 UTC m=+2.250838349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.345141 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362bea1df115 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.632903957 +0000 UTC m=+2.251962490,LastTimestamp:2026-02-24 14:49:00.632903957 +0000 UTC m=+2.251962490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.352936 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897362bea62f02c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.637425708 +0000 UTC m=+2.256484241,LastTimestamp:2026-02-24 14:49:00.637425708 +0000 UTC m=+2.256484241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.360900 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362bea7d89a7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.639168935 +0000 UTC m=+2.258227468,LastTimestamp:2026-02-24 14:49:00.639168935 +0000 UTC m=+2.258227468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.365983 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362beacc3342 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.644324162 +0000 UTC m=+2.263382695,LastTimestamp:2026-02-24 14:49:00.644324162 +0000 UTC m=+2.263382695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.371099 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362beb2de627 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.650726951 +0000 UTC m=+2.269785484,LastTimestamp:2026-02-24 14:49:00.650726951 +0000 UTC m=+2.269785484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.378115 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362beb4ecbac openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.65288286 +0000 UTC m=+2.271941393,LastTimestamp:2026-02-24 14:49:00.65288286 +0000 UTC m=+2.271941393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.384680 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362beb900810 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.65715816 +0000 UTC m=+2.276216683,LastTimestamp:2026-02-24 14:49:00.65715816 +0000 UTC m=+2.276216683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.391159 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362bebcd70b7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.661182647 +0000 UTC m=+2.280241170,LastTimestamp:2026-02-24 14:49:00.661182647 +0000 UTC m=+2.280241170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.397665 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897362bec15d1df openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.404220 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362becae9eea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.675940074 +0000 UTC m=+2.294998607,LastTimestamp:2026-02-24 14:49:00.675940074 +0000 UTC m=+2.294998607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.413335 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c0149e93d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.021661501 +0000 UTC m=+2.640720034,LastTimestamp:2026-02-24 14:49:01.021661501 +0000 UTC m=+2.640720034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.419904 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c041d51ce openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.069070798 +0000 UTC m=+2.688129321,LastTimestamp:2026-02-24 14:49:01.069070798 +0000 UTC m=+2.688129321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.426987 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c0439ccbb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.070937275 +0000 UTC m=+2.689995798,LastTimestamp:2026-02-24 14:49:01.070937275 +0000 UTC m=+2.689995798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.433622 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362c09f32e0f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.166972431 +0000 UTC m=+2.786030964,LastTimestamp:2026-02-24 14:49:01.166972431 +0000 UTC m=+2.786030964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.440251 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c0a857c13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.176560659 +0000 UTC m=+2.795619162,LastTimestamp:2026-02-24 14:49:01.176560659 +0000 UTC m=+2.795619162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.444747 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897362c0aeebd16 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.183458582 +0000 UTC m=+2.802517085,LastTimestamp:2026-02-24 14:49:01.183458582 +0000 UTC m=+2.802517085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.447076 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c0af448de openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.183822046 +0000 UTC m=+2.802880549,LastTimestamp:2026-02-24 14:49:01.183822046 +0000 UTC m=+2.802880549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.452953 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c13a6a5d5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.329728981 +0000 UTC m=+2.948787474,LastTimestamp:2026-02-24 14:49:01.329728981 +0000 UTC m=+2.948787474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.459729 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\""
event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c1460a32a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.341917994 +0000 UTC m=+2.960976487,LastTimestamp:2026-02-24 14:49:01.341917994 +0000 UTC m=+2.960976487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.466211 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c14bf1565 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.348107621 +0000 UTC m=+2.967166114,LastTimestamp:2026-02-24 14:49:01.348107621 +0000 UTC m=+2.967166114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.472685 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362c22e9f9da openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.585799642 +0000 UTC m=+3.204858165,LastTimestamp:2026-02-24 14:49:01.585799642 +0000 UTC m=+3.204858165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.479357 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897362c230a883a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.587933242 +0000 UTC m=+3.206991765,LastTimestamp:2026-02-24 14:49:01.587933242 +0000 UTC m=+3.206991765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.488999 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c2313f448 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.588550728 +0000 UTC m=+3.207609221,LastTimestamp:2026-02-24 14:49:01.588550728 +0000 UTC m=+3.207609221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.496699 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c231dffd3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.589209043 +0000 UTC m=+3.208267536,LastTimestamp:2026-02-24 14:49:01.589209043 +0000 UTC m=+3.208267536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.505407 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c232d108b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.590196363 +0000 UTC m=+3.209254856,LastTimestamp:2026-02-24 
14:49:01.590196363 +0000 UTC m=+3.209254856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.513706 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897362c23fa46da openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.603645146 +0000 UTC m=+3.222703689,LastTimestamp:2026-02-24 14:49:01.603645146 +0000 UTC m=+3.222703689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.521892 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c24785314 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.611905812 +0000 UTC m=+3.230964345,LastTimestamp:2026-02-24 14:49:01.611905812 +0000 UTC m=+3.230964345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.528833 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362c24aa1ee5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.615169253 +0000 UTC m=+3.234227746,LastTimestamp:2026-02-24 14:49:01.615169253 +0000 UTC m=+3.234227746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.535946 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c24aa489d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.615179933 +0000 UTC m=+3.234238466,LastTimestamp:2026-02-24 14:49:01.615179933 +0000 UTC m=+3.234238466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.541412 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362c24d0ee06 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.617712646 +0000 UTC m=+3.236771139,LastTimestamp:2026-02-24 14:49:01.617712646 +0000 UTC m=+3.236771139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.547348 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c24d2c732 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.617833778 +0000 UTC m=+3.236892311,LastTimestamp:2026-02-24 14:49:01.617833778 +0000 UTC m=+3.236892311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.553743 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c258eef19 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.630164761 +0000 UTC m=+3.249223284,LastTimestamp:2026-02-24 14:49:01.630164761 +0000 UTC m=+3.249223284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.560070 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362c36c60568 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.918987624 +0000 UTC m=+3.538046127,LastTimestamp:2026-02-24 14:49:01.918987624 +0000 UTC m=+3.538046127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.566017 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c370d636b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.923664747 +0000 UTC m=+3.542723250,LastTimestamp:2026-02-24 14:49:01.923664747 +0000 UTC m=+3.542723250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.572729 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362c37b6af1d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 
14:49:01.934759709 +0000 UTC m=+3.553818212,LastTimestamp:2026-02-24 14:49:01.934759709 +0000 UTC m=+3.553818212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.579139 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362c37d747b9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.936895929 +0000 UTC m=+3.555954432,LastTimestamp:2026-02-24 14:49:01.936895929 +0000 UTC m=+3.555954432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.585116 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c37fe889b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.939468443 +0000 UTC m=+3.558526956,LastTimestamp:2026-02-24 14:49:01.939468443 +0000 UTC m=+3.558526956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.592289 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c3808bb0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.940136719 +0000 UTC m=+3.559195222,LastTimestamp:2026-02-24 14:49:01.940136719 +0000 UTC 
m=+3.559195222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.599933 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362c4493bd9d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.150573469 +0000 UTC m=+3.769631962,LastTimestamp:2026-02-24 14:49:02.150573469 +0000 UTC m=+3.769631962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.605357 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c45ac8eb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.168977079 +0000 UTC m=+3.788035572,LastTimestamp:2026-02-24 14:49:02.168977079 +0000 UTC m=+3.788035572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.612097 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897362c45b910db openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.169796827 +0000 UTC m=+3.788855320,LastTimestamp:2026-02-24 14:49:02.169796827 +0000 UTC m=+3.788855320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.620132 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c46cb0380 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.187750272 +0000 UTC m=+3.806808775,LastTimestamp:2026-02-24 14:49:02.187750272 +0000 UTC m=+3.806808775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.627445 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c46e0463f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.189143615 +0000 UTC m=+3.808202118,LastTimestamp:2026-02-24 14:49:02.189143615 +0000 UTC m=+3.808202118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.634658 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c4773f2c6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.198821574 +0000 UTC m=+3.817880067,LastTimestamp:2026-02-24 14:49:02.198821574 +0000 UTC m=+3.817880067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.641060 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c51fd97d9 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.375614425 +0000 UTC m=+3.994672938,LastTimestamp:2026-02-24 14:49:02.375614425 +0000 UTC m=+3.994672938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.648423 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c52d70165 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.389862757 +0000 UTC m=+4.008921260,LastTimestamp:2026-02-24 14:49:02.389862757 +0000 UTC m=+4.008921260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: W0224 14:49:24.655758 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.655836 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.655962 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c52f81140 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.392029504 +0000 UTC m=+4.011088017,LastTimestamp:2026-02-24 14:49:02.392029504 +0000 UTC m=+4.011088017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.662487 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c530a3de8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.393220584 +0000 UTC m=+4.012279087,LastTimestamp:2026-02-24 14:49:02.393220584 +0000 UTC m=+4.012279087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.668529 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c5423bd15 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.411668757 +0000 UTC m=+4.030727260,LastTimestamp:2026-02-24 14:49:02.411668757 +0000 UTC m=+4.030727260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.675936 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c5df2fcfe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.576246014 +0000 UTC m=+4.195304517,LastTimestamp:2026-02-24 14:49:02.576246014 +0000 UTC m=+4.195304517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.683116 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c5eb85efc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.589181692 +0000 UTC m=+4.208240195,LastTimestamp:2026-02-24 14:49:02.589181692 +0000 UTC m=+4.208240195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.691267 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c83d3b158 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:03.21172924 +0000 UTC m=+4.830787733,LastTimestamp:2026-02-24 14:49:03.21172924 +0000 UTC m=+4.830787733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.699892 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c95043435 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:03.500121141 +0000 UTC m=+5.119179624,LastTimestamp:2026-02-24 14:49:03.500121141 +0000 UTC m=+5.119179624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.706162 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c95cfd13d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:03.513465149 +0000 UTC m=+5.132523672,LastTimestamp:2026-02-24 14:49:03.513465149 +0000 UTC m=+5.132523672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.714603 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362c95e88c44 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:03.515085892 +0000 UTC m=+5.134144385,LastTimestamp:2026-02-24 14:49:03.515085892 +0000 UTC m=+5.134144385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.721990 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362ca3f5bb5a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:03.750830938 +0000 UTC m=+5.369889481,LastTimestamp:2026-02-24 14:49:03.750830938 +0000 UTC m=+5.369889481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.729637 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362ca4fb8da2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:03.767989666 +0000 UTC m=+5.387048189,LastTimestamp:2026-02-24 14:49:03.767989666 +0000 UTC m=+5.387048189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.739055 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362ca51201ca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:03.769461194 +0000 UTC m=+5.388519717,LastTimestamp:2026-02-24 14:49:03.769461194 +0000 UTC m=+5.388519717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.745892 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362cb4f2657d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:04.035825021 +0000 UTC m=+5.654883554,LastTimestamp:2026-02-24 14:49:04.035825021 +0000 UTC m=+5.654883554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.754177 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362cb5f5dcf1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:04.052829425 +0000 UTC m=+5.671887928,LastTimestamp:2026-02-24 14:49:04.052829425 +0000 UTC m=+5.671887928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.760134 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362cb6115ea6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:04.054632102 +0000 UTC m=+5.673690635,LastTimestamp:2026-02-24 
14:49:04.054632102 +0000 UTC m=+5.673690635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.768533 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362cc2f2d369 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:04.270734185 +0000 UTC m=+5.889792678,LastTimestamp:2026-02-24 14:49:04.270734185 +0000 UTC m=+5.889792678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.776111 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362cc39782df openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:04.281527007 +0000 UTC m=+5.900585500,LastTimestamp:2026-02-24 14:49:04.281527007 +0000 UTC m=+5.900585500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.784159 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362cc3a8d25b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:04.282661467 +0000 UTC m=+5.901719960,LastTimestamp:2026-02-24 14:49:04.282661467 +0000 UTC m=+5.901719960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.792549 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362cd4d616d8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:04.570840792 +0000 UTC m=+6.189899315,LastTimestamp:2026-02-24 14:49:04.570840792 +0000 UTC m=+6.189899315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.804082 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897362cd61d8d5a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:04.592301402 +0000 UTC m=+6.211359935,LastTimestamp:2026-02-24 14:49:04.592301402 +0000 UTC m=+6.211359935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.815074 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Feb 24 14:49:24 crc kubenswrapper[4982]: &Event{ObjectMeta:{kube-controller-manager-crc.1897362e7ffd3606 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 24 14:49:24 crc kubenswrapper[4982]: body:
Feb 24 14:49:24 crc kubenswrapper[4982]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:11.73727591 +0000 UTC m=+13.356334453,LastTimestamp:2026-02-24 14:49:11.73727591 +0000 UTC m=+13.356334453,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 24 14:49:24 crc kubenswrapper[4982]: >
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.823094 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362e7ffedbc0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:11.737383872 +0000 UTC m=+13.356442405,LastTimestamp:2026-02-24 14:49:11.737383872 +0000 UTC m=+13.356442405,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.832075 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 24 14:49:24 crc kubenswrapper[4982]: &Event{ObjectMeta:{kube-apiserver-crc.1897362f0863e3dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 24 14:49:24 crc kubenswrapper[4982]: body:
Feb 24 14:49:24 crc kubenswrapper[4982]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:14.02570646 +0000 UTC m=+15.644764973,LastTimestamp:2026-02-24 14:49:14.02570646 +0000 UTC m=+15.644764973,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 24 14:49:24 crc kubenswrapper[4982]: >
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.839764 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362f0864d5b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:14.025768372 +0000 UTC m=+15.644826875,LastTimestamp:2026-02-24 14:49:14.025768372 +0000 UTC m=+15.644826875,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.845822 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 24 14:49:24 crc kubenswrapper[4982]: &Event{ObjectMeta:{kube-apiserver-crc.1897362f0de4e394 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Feb 24 14:49:24 crc kubenswrapper[4982]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 24 14:49:24 crc kubenswrapper[4982]:
Feb 24 14:49:24 crc kubenswrapper[4982]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:14.118046612 +0000 UTC m=+15.737105115,LastTimestamp:2026-02-24 14:49:14.118046612 +0000 UTC m=+15.737105115,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 24 14:49:24 crc kubenswrapper[4982]: >
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.852805 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362f0de5c337 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:14.118103863 +0000 UTC m=+15.737162376,LastTimestamp:2026-02-24 14:49:14.118103863 +0000 UTC m=+15.737162376,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.861187 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897362c530a3de8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c530a3de8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.393220584 +0000 UTC m=+4.012279087,LastTimestamp:2026-02-24 14:49:14.259373262 +0000 UTC m=+15.878431755,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.869449 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897362c5df2fcfe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c5df2fcfe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.576246014 +0000 UTC m=+4.195304517,LastTimestamp:2026-02-24 14:49:14.474316546 +0000 UTC m=+16.093375049,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.877387 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897362c5eb85efc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897362c5eb85efc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:02.589181692 +0000 UTC m=+4.208240195,LastTimestamp:2026-02-24 14:49:14.49247498 +0000 UTC m=+16.111533513,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.888215 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Feb 24 14:49:24 crc kubenswrapper[4982]: &Event{ObjectMeta:{kube-controller-manager-crc.18973630d40f06f3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 24 14:49:24 crc kubenswrapper[4982]: body:
Feb 24 14:49:24 crc kubenswrapper[4982]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:21.737664243 +0000 UTC m=+23.356722776,LastTimestamp:2026-02-24 14:49:21.737664243 +0000 UTC m=+23.356722776,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 24 14:49:24 crc kubenswrapper[4982]: >
Feb 24 14:49:24 crc kubenswrapper[4982]: E0224 14:49:24.895889 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18973630d4100555 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:21.737729365 +0000 UTC m=+23.356787898,LastTimestamp:2026-02-24 14:49:21.737729365 +0000 UTC m=+23.356787898,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18973630d4100555 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:21.737729365 +0000 UTC m=+23.356787898,LastTimestamp:2026-02-24 14:49:21.737729365 +0000 UTC m=+23.356787898,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:25 crc kubenswrapper[4982]: I0224 14:49:25.075623 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:25 crc kubenswrapper[4982]: I0224 14:49:25.202410 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:49:25 crc kubenswrapper[4982]: I0224 14:49:25.203204 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:25 crc kubenswrapper[4982]: I0224 14:49:25.205854 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:25 crc kubenswrapper[4982]: I0224 14:49:25.205926 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:25 crc kubenswrapper[4982]: I0224 14:49:25.205947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:25 crc kubenswrapper[4982]: I0224 14:49:25.207187 4982 scope.go:117] "RemoveContainer" containerID="bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095" Feb 24 14:49:25 crc kubenswrapper[4982]: E0224 14:49:25.207536 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 14:49:25 crc kubenswrapper[4982]: W0224 14:49:25.235588 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 24 14:49:25 crc kubenswrapper[4982]: E0224 14:49:25.235690 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group 
\"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 14:49:25 crc kubenswrapper[4982]: W0224 14:49:25.941326 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 24 14:49:25 crc kubenswrapper[4982]: E0224 14:49:25.941411 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 14:49:26 crc kubenswrapper[4982]: I0224 14:49:26.074282 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:27 crc kubenswrapper[4982]: I0224 14:49:27.072694 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:27 crc kubenswrapper[4982]: E0224 14:49:27.515422 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 14:49:27 crc kubenswrapper[4982]: I0224 14:49:27.522476 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:27 crc kubenswrapper[4982]: I0224 14:49:27.524373 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:27 crc kubenswrapper[4982]: I0224 14:49:27.524461 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:27 crc kubenswrapper[4982]: I0224 14:49:27.524486 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:27 crc kubenswrapper[4982]: I0224 14:49:27.524572 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:49:27 crc kubenswrapper[4982]: E0224 14:49:27.533238 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 14:49:28 crc kubenswrapper[4982]: I0224 14:49:28.075185 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:29 crc kubenswrapper[4982]: I0224 14:49:29.075453 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:29 crc kubenswrapper[4982]: E0224 14:49:29.263009 4982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 14:49:30 crc 
kubenswrapper[4982]: I0224 14:49:30.073168 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.072704 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.738550 4982 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.738687 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.738796 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.739056 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.741236 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.741305 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.741333 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.742327 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 24 14:49:31 crc kubenswrapper[4982]: I0224 14:49:31.742780 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248" gracePeriod=30 Feb 24 14:49:31 crc kubenswrapper[4982]: E0224 14:49:31.747352 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18973630d40f06f3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 24 14:49:31 crc 
kubenswrapper[4982]: &Event{ObjectMeta:{kube-controller-manager-crc.18973630d40f06f3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 24 14:49:31 crc kubenswrapper[4982]: body: Feb 24 14:49:31 crc kubenswrapper[4982]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:21.737664243 +0000 UTC m=+23.356722776,LastTimestamp:2026-02-24 14:49:31.738643381 +0000 UTC m=+33.357701914,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 24 14:49:31 crc kubenswrapper[4982]: > Feb 24 14:49:31 crc kubenswrapper[4982]: E0224 14:49:31.755316 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18973630d4100555\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18973630d4100555 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:21.737729365 +0000 UTC m=+23.356787898,LastTimestamp:2026-02-24 14:49:31.738743854 +0000 UTC m=+33.357802387,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:31 crc kubenswrapper[4982]: E0224 14:49:31.764487 4982 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897363328688d4e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:31.742752078 +0000 UTC m=+33.361810621,LastTimestamp:2026-02-24 14:49:31.742752078 +0000 UTC m=+33.361810621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:31 crc kubenswrapper[4982]: E0224 14:49:31.877538 4982 event.go:359] "Server rejected 
event (will not retry!)" err="events \"kube-controller-manager-crc.1897362beb4ecbac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362beb4ecbac openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:00.65288286 +0000 UTC m=+2.271941393,LastTimestamp:2026-02-24 14:49:31.867709071 +0000 UTC m=+33.486767594,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:32 crc kubenswrapper[4982]: I0224 14:49:32.076267 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:32 crc kubenswrapper[4982]: E0224 14:49:32.137812 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897362c0149e93d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c0149e93d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.021661501 +0000 UTC m=+2.640720034,LastTimestamp:2026-02-24 14:49:32.12839897 +0000 UTC m=+33.747457513,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:32 crc kubenswrapper[4982]: E0224 14:49:32.151912 4982 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897362c041d51ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897362c041d51ce openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:49:01.069070798 +0000 UTC m=+2.688129321,LastTimestamp:2026-02-24 14:49:32.143412051 +0000 UTC 
m=+33.762470574,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:49:32 crc kubenswrapper[4982]: I0224 14:49:32.357552 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 14:49:32 crc kubenswrapper[4982]: I0224 14:49:32.358454 4982 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248" exitCode=255 Feb 24 14:49:32 crc kubenswrapper[4982]: I0224 14:49:32.358689 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248"} Feb 24 14:49:32 crc kubenswrapper[4982]: I0224 14:49:32.358932 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789"} Feb 24 14:49:32 crc kubenswrapper[4982]: I0224 14:49:32.359183 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:32 crc kubenswrapper[4982]: I0224 14:49:32.360591 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:32 crc kubenswrapper[4982]: I0224 14:49:32.360854 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:32 crc kubenswrapper[4982]: I0224 14:49:32.361008 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:33 crc kubenswrapper[4982]: I0224 14:49:33.076039 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:33 crc kubenswrapper[4982]: I0224 14:49:33.360860 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:33 crc kubenswrapper[4982]: I0224 14:49:33.362122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:33 crc kubenswrapper[4982]: I0224 14:49:33.362163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:33 crc kubenswrapper[4982]: I0224 14:49:33.362181 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:34 crc kubenswrapper[4982]: I0224 14:49:34.072159 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:34 crc kubenswrapper[4982]: E0224 14:49:34.522282 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API 
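The Warning events in this stretch record two distinct startup-probe failure shapes: the cluster-policy-controller /healthz on port 10357 timing out, and the kube-apiserver /livez answering 403 to an unauthenticated request. Both are plain HTTPS GETs and can be replayed outside the kubelet. A short Go sketch (endpoints taken from the log above; certificate verification disabled only because this is a throwaway CRC VM) that reproduces both failure shapes:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Short timeout mirrors the probe's "Client.Timeout exceeded" errors.
    	client := &http.Client{
    		Timeout: 1 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // lab VM only
    		},
    	}
    	for _, url := range []string{
    		"https://192.168.126.11:10357/healthz", // cluster-policy-controller
    		"https://192.168.126.11:6443/livez",    // kube-apiserver
    	} {
    		resp, err := client.Get(url)
    		if err != nil {
    			// Matches "context deadline exceeded" / "request canceled
    			// while waiting for connection" in the events above.
    			fmt.Println(url, "error:", err)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		// A 403 with a Status body reproduces the anonymous
    		// "cannot get path /livez" probe event.
    		fmt.Println(url, resp.StatusCode, string(body))
    	}
    }

A timeout on 10357 points at the container still starting (or wedged), while a 403 on /livez is purely an authorization answer: the endpoint is up, it just refuses anonymous callers.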
group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 14:49:34 crc kubenswrapper[4982]: I0224 14:49:34.534054 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:34 crc kubenswrapper[4982]: I0224 14:49:34.535232 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:34 crc kubenswrapper[4982]: I0224 14:49:34.535370 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:34 crc kubenswrapper[4982]: I0224 14:49:34.535452 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:34 crc kubenswrapper[4982]: I0224 14:49:34.535554 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:49:34 crc kubenswrapper[4982]: E0224 14:49:34.541663 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 14:49:35 crc kubenswrapper[4982]: I0224 14:49:35.075249 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:36 crc kubenswrapper[4982]: I0224 14:49:36.075329 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:37 crc kubenswrapper[4982]: I0224 14:49:37.076115 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:38 crc kubenswrapper[4982]: I0224 14:49:38.072529 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:38 crc kubenswrapper[4982]: W0224 14:49:38.452174 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:38 crc kubenswrapper[4982]: E0224 14:49:38.452314 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 14:49:38 crc kubenswrapper[4982]: W0224 14:49:38.720263 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 24 14:49:38 crc kubenswrapper[4982]: E0224 14:49:38.720368 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 14:49:38 crc kubenswrapper[4982]: I0224 14:49:38.737037 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:38 crc kubenswrapper[4982]: I0224 14:49:38.737283 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:38 crc kubenswrapper[4982]: I0224 14:49:38.738819 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:38 crc kubenswrapper[4982]: I0224 14:49:38.738874 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:38 crc kubenswrapper[4982]: I0224 14:49:38.738893 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:38 crc kubenswrapper[4982]: I0224 14:49:38.743903 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.076011 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.145469 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.146922 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.146978 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.146994 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.147876 4982 scope.go:117] "RemoveContainer" containerID="bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095" Feb 24 14:49:39 crc kubenswrapper[4982]: E0224 14:49:39.263257 4982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.379634 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.380557 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.381128 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.381199 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:39 crc kubenswrapper[4982]: I0224 14:49:39.381226 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.075223 
4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.384854 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.388172 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee"} Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.388320 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.388425 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.389722 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.389778 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.389796 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.389878 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.389945 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:40 crc kubenswrapper[4982]: I0224 14:49:40.389968 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.076472 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.394472 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.395437 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.398820 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee" exitCode=255 Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.398913 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee"} Feb 24 14:49:41 crc kubenswrapper[4982]: 
I0224 14:49:41.398992 4982 scope.go:117] "RemoveContainer" containerID="bb1318d93768992f92f60389372d708f28be4812c606385cc2c7986840de9095" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.399247 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.400791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.400853 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.400876 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.401908 4982 scope.go:117] "RemoveContainer" containerID="482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee" Feb 24 14:49:41 crc kubenswrapper[4982]: E0224 14:49:41.402275 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 14:49:41 crc kubenswrapper[4982]: E0224 14:49:41.531546 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.542799 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.545250 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.545353 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.545386 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.545453 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:49:41 crc kubenswrapper[4982]: E0224 14:49:41.554108 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 14:49:41 crc kubenswrapper[4982]: I0224 14:49:41.719733 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.074970 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.120220 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
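Beyond the probe noise, the kubelet is cycling on the same three API objects: get csinodes/crc, get leases/crc in kube-node-lease, and creating the Node object itself (the eviction manager's node "crc" not found follows directly from the failed registration). A Go sketch that retries those exact lookups with a known-good admin kubeconfig (the path below is a placeholder) to separate an authentication failure from a genuine RBAC or missing-object problem:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder: point this at an admin kubeconfig for the cluster.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/admin/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()
    	// The three lookups the kubelet keeps failing as system:anonymous.
    	if _, err := cs.StorageV1().CSINodes().Get(ctx, "crc", metav1.GetOptions{}); err != nil {
    		fmt.Println("csinodes/crc:", err)
    	}
    	if _, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "crc", metav1.GetOptions{}); err != nil {
    		fmt.Println("leases kube-node-lease/crc:", err)
    	}
    	if _, err := cs.CoreV1().Nodes().Get(ctx, "crc", metav1.GetOptions{}); err != nil {
    		// A NotFound here matches the eviction manager: the node object
    		// was never registered during this boot.
    		fmt.Println("nodes/crc:", err)
    	}
    }

If these succeed (or return NotFound rather than Forbidden) under admin credentials, the cluster-side objects and RBAC are fine and the problem is confined to the kubelet's own authentication.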
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.120583 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.122844 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.122920 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.122940 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.406948 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.410259 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.411765 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.411814 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.411833 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:42 crc kubenswrapper[4982]: I0224 14:49:42.412789 4982 scope.go:117] "RemoveContainer" containerID="482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee" Feb 24 14:49:42 crc kubenswrapper[4982]: E0224 14:49:42.413149 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 14:49:43 crc kubenswrapper[4982]: I0224 14:49:43.075655 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:43 crc kubenswrapper[4982]: W0224 14:49:43.659304 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 24 14:49:43 crc kubenswrapper[4982]: E0224 14:49:43.659373 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 14:49:44 crc kubenswrapper[4982]: I0224 14:49:44.075359 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:45 crc kubenswrapper[4982]: I0224 14:49:45.076298 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:45 crc kubenswrapper[4982]: I0224 14:49:45.202405 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:49:45 crc kubenswrapper[4982]: I0224 14:49:45.202787 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:45 crc kubenswrapper[4982]: I0224 14:49:45.204872 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:45 crc kubenswrapper[4982]: I0224 14:49:45.204957 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:45 crc kubenswrapper[4982]: I0224 14:49:45.204985 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:45 crc kubenswrapper[4982]: I0224 14:49:45.206118 4982 scope.go:117] "RemoveContainer" containerID="482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee" Feb 24 14:49:45 crc kubenswrapper[4982]: E0224 14:49:45.206571 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 14:49:46 crc kubenswrapper[4982]: I0224 14:49:46.077662 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:47 crc kubenswrapper[4982]: I0224 14:49:47.075766 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:48 crc kubenswrapper[4982]: I0224 14:49:48.078267 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:48 crc kubenswrapper[4982]: E0224 14:49:48.542446 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 14:49:48 crc kubenswrapper[4982]: I0224 14:49:48.554597 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:48 crc kubenswrapper[4982]: I0224 14:49:48.556361 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:48 crc kubenswrapper[4982]: I0224 14:49:48.556444 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:48 crc kubenswrapper[4982]: I0224 14:49:48.556467 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:48 crc kubenswrapper[4982]: I0224 14:49:48.556710 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:49:48 crc kubenswrapper[4982]: E0224 14:49:48.564086 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 14:49:48 crc kubenswrapper[4982]: W0224 14:49:48.850859 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 24 14:49:48 crc kubenswrapper[4982]: E0224 14:49:48.850931 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 14:49:49 crc kubenswrapper[4982]: I0224 14:49:49.074856 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:49 crc kubenswrapper[4982]: E0224 14:49:49.263394 4982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 14:49:50 crc kubenswrapper[4982]: I0224 14:49:50.074164 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:51 crc kubenswrapper[4982]: I0224 14:49:51.076392 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:52 crc kubenswrapper[4982]: I0224 14:49:52.073734 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:53 crc kubenswrapper[4982]: I0224 14:49:53.074631 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:54 crc kubenswrapper[4982]: I0224 14:49:54.072022 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:54 crc kubenswrapper[4982]: I0224 14:49:54.951968 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 14:49:54 crc 
kubenswrapper[4982]: I0224 14:49:54.952298 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:54 crc kubenswrapper[4982]: I0224 14:49:54.953970 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:54 crc kubenswrapper[4982]: I0224 14:49:54.954045 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:54 crc kubenswrapper[4982]: I0224 14:49:54.954063 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:55 crc kubenswrapper[4982]: I0224 14:49:55.076091 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:55 crc kubenswrapper[4982]: E0224 14:49:55.549698 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 14:49:55 crc kubenswrapper[4982]: I0224 14:49:55.565214 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:55 crc kubenswrapper[4982]: I0224 14:49:55.567193 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:55 crc kubenswrapper[4982]: I0224 14:49:55.567255 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:55 crc kubenswrapper[4982]: I0224 14:49:55.567276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:49:55 crc kubenswrapper[4982]: I0224 14:49:55.567380 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:49:55 crc kubenswrapper[4982]: E0224 14:49:55.574125 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 24 14:49:56 crc kubenswrapper[4982]: I0224 14:49:56.073479 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:57 crc kubenswrapper[4982]: I0224 14:49:57.074841 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 14:49:57 crc kubenswrapper[4982]: I0224 14:49:57.145204 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:49:57 crc kubenswrapper[4982]: I0224 14:49:57.147022 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:49:57 crc kubenswrapper[4982]: I0224 14:49:57.147080 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:49:57 crc kubenswrapper[4982]: I0224 
Feb 24 14:49:57 crc kubenswrapper[4982]: I0224 14:49:57.147099 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:49:57 crc kubenswrapper[4982]: I0224 14:49:57.148006 4982 scope.go:117] "RemoveContainer" containerID="482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee"
Feb 24 14:49:57 crc kubenswrapper[4982]: E0224 14:49:57.148344 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 14:49:58 crc kubenswrapper[4982]: I0224 14:49:58.074973 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 14:49:59 crc kubenswrapper[4982]: I0224 14:49:59.075184 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 14:49:59 crc kubenswrapper[4982]: E0224 14:49:59.263699 4982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 14:50:00 crc kubenswrapper[4982]: I0224 14:50:00.074877 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 14:50:01 crc kubenswrapper[4982]: I0224 14:50:01.073931 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 14:50:02 crc kubenswrapper[4982]: I0224 14:50:02.075755 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 14:50:02 crc kubenswrapper[4982]: E0224 14:50:02.560170 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 24 14:50:02 crc kubenswrapper[4982]: I0224 14:50:02.574549 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 14:50:02 crc kubenswrapper[4982]: I0224 14:50:02.576473 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:02 crc kubenswrapper[4982]: I0224 14:50:02.576585 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:02 crc kubenswrapper[4982]: I0224 14:50:02.576612 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
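One unrelated entry is mixed into the retry loop above: kube-apiserver-check-endpoints is in CrashLoopBackOff, and the kubelet refuses to restart it for 20s. The kubelet's restart back-off is exponential, conventionally starting at 10s and doubling per consecutive crash up to a 5-minute cap; those constants are an assumption here, the log itself only shows the 20s sample. A sketch of that schedule, not the kubelet's actual code:

    package main

    import (
    	"fmt"
    	"time"
    )

    // crashLoopDelay returns a kubelet-style restart back-off for the n-th
    // consecutive crash, assuming a 10s base doubling up to a 5-minute cap.
    func crashLoopDelay(n int) time.Duration {
    	d := 10 * time.Second
    	for i := 0; i < n; i++ {
    		d *= 2
    		if d > 5*time.Minute {
    			return 5 * time.Minute
    		}
    	}
    	return d
    }

    func main() {
    	for n := 0; n < 6; n++ {
    		fmt.Printf("crash %d -> back-off %v\n", n, crashLoopDelay(n))
    	}
    	// crash 1 -> back-off 20s, matching the "back-off 20s" entry above.
    }
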
Feb 24 14:50:02 crc kubenswrapper[4982]: I0224 14:50:02.576663 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 14:50:02 crc kubenswrapper[4982]: E0224 14:50:02.584773 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 24 14:50:03 crc kubenswrapper[4982]: I0224 14:50:03.075213 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 14:50:04 crc kubenswrapper[4982]: I0224 14:50:04.074284 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 14:50:04 crc kubenswrapper[4982]: I0224 14:50:04.772138 4982 csr.go:261] certificate signing request csr-rwpq2 is approved, waiting to be issued
Feb 24 14:50:04 crc kubenswrapper[4982]: I0224 14:50:04.782465 4982 csr.go:257] certificate signing request csr-rwpq2 is issued
Feb 24 14:50:04 crc kubenswrapper[4982]: I0224 14:50:04.851992 4982 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 24 14:50:04 crc kubenswrapper[4982]: I0224 14:50:04.912348 4982 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 24 14:50:05 crc kubenswrapper[4982]: I0224 14:50:05.784043 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-20 00:02:06.821869288 +0000 UTC
Feb 24 14:50:05 crc kubenswrapper[4982]: I0224 14:50:05.784137 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7161h12m1.03773938s for next certificate rotation
Feb 24 14:50:07 crc kubenswrapper[4982]: I0224 14:50:07.244639 4982 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.090860 4982 apiserver.go:52] "Watching apiserver"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.096449 4982 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.098026 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hg2sm","openshift-network-diagnostics/network-check-target-xd92c","openshift-image-registry/node-ca-ccj66","openshift-machine-config-operator/machine-config-daemon-b79sf","openshift-multus/multus-additional-cni-plugins-lknrx","openshift-multus/network-metrics-daemon-6gwqq","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/iptables-alerter-4ln5h","openshift-ovn-kubernetes/ovnkube-node-96fkj","openshift-multus/multus-jgtdj","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq"]
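This is the turning point of the section: the pending CSR csr-rwpq2 is approved and issued at 14:50:04, the kubelet rotates to the new client certificate ("Certificate rotation detected"), and within seconds the formerly forbidden calls succeed: informer caches populate, "Watching apiserver" appears, and the first SyncLoop ADD delivers the node's pods. A sketch of inspecting such a CSR with client-go (the CSR name is from the log; the kubeconfig path is assumed):

    package main

    import (
    	"context"
    	"fmt"

    	certificatesv1 "k8s.io/api/certificates/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(
    		context.TODO(), "csr-rwpq2", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, cond := range csr.Status.Conditions {
    		if cond.Type == certificatesv1.CertificateApproved {
    			fmt.Println("approved:", cond.Reason)
    		}
    	}
    	// Once approved AND signed, Status.Certificate carries the PEM the
    	// kubelet rotates to ("certificate signing request csr-rwpq2 is issued").
    	fmt.Println("issued:", len(csr.Status.Certificate) > 0)
    }
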
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.098591 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.098687 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.098757 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.098838 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.099439 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.099595 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.099635 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.099639 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.100173 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.100079 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-b79sf"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.101060 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lknrx"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.101467 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.101664 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jgtdj"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.101779 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hg2sm"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.101680 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.102487 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ccj66"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.105796 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.105853 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.106279 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.106447 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.106580 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.106656 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.106820 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.106850 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.106826 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.106949 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.106969 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107044 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107130 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107172 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107209 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
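All of the sandbox-creation errors above reduce to one condition: the container runtime reports NetworkReady=false because no CNI configuration exists in /etc/kubernetes/cni/net.d/ yet, and that file will only be written once the multus/OVN-Kubernetes pods in this very ADD batch come up (host-network pods are exempt from the gate, which is how the CNI pods themselves can start). A sketch of the effect of that readiness check as a plain directory scan; the directory is from the log, but the glob patterns are an assumption, not CRI-O's actual lookup:

    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    func main() {
    	// The runtime considers the network plugin ready once a CNI config
    	// file shows up in its config dir.
    	dir := "/etc/kubernetes/cni/net.d"
    	var found []string
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		m, err := filepath.Glob(filepath.Join(dir, pat))
    		if err != nil {
    			panic(err)
    		}
    		found = append(found, m...)
    	}
    	if len(found) == 0 {
    		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady (no CNI configuration file)")
    		return
    	}
    	fmt.Println("NetworkReady=true, configs:", found)
    }
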
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107295 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107311 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107376 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107446 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107460 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107471 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.107584 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.109065 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.109157 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.109188 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.109197 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.109217 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.109332 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.109350 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.109850 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.112006 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.112029 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.112041 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.112105 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
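The status patch failures that follow form another bootstrap cycle: before persisting pod status the apiserver calls the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, but that endpoint appears to be served by the webhook container of network-node-identity-vrzqb, which this same kubelet has not started yet, so every patch fails with connection refused until that pod is up. A trivial reachability probe for the endpoint (address from the log below; the timeout is arbitrary):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Plain TCP check of the webhook listener the apiserver dials.
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
    	if err != nil {
    		// Mirrors the log: dial tcp 127.0.0.1:9743: connect: connection refused
    		fmt.Println("webhook unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("webhook endpoint is accepting connections")
    }
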
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.114202 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.114367 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.114436 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.114697 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.126885 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.135794 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.142701 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.151744 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.157362 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.157523 4982 scope.go:117] "RemoveContainer" containerID="482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.160731 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.167596 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.177997 4982 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.188474 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.200541 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.213351 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.235123 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.247057 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.257193 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.266459 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273037 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273095 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273121 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273143 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273163 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273188 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273210 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273232 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273254 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273277 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273299 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273320 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273341 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273365 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273386 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273410 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273432 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273459 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273480 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273519 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273539 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273562 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273586 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273608 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273628 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273647 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273667 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273687 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273708 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273729 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273752 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273777 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273800 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273822 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273822 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273849 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273871 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273892 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273914 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273936 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273956 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.273978 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274000 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274024 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274045 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274067 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274090 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274111 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274113 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274132 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274160 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274179 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274201 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274222 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 14:50:08 crc 
kubenswrapper[4982]: I0224 14:50:08.274245 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274266 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274289 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274310 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274332 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274352 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274359 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274362 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274372 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274440 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274477 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274533 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274569 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274602 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274762 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274807 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274853 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274888 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274926 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274961 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274995 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275030 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275067 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275095 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275119 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275143 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275170 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275195 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275219 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275241 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275263 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275290 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275314 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275339 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275366 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275390 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275413 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275437 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275461 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275484 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275530 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275554 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275580 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275603 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275625 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275648 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275669 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 
14:50:08.275692 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275713 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275736 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275760 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275921 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275784 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277254 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277313 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277371 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277431 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277466 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277523 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277562 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277631 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277677 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277717 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277750 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277774 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277799 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277824 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278006 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278101 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278131 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278157 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278180 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278203 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278234 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278274 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278362 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278445 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278521 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278638 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278726 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278833 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278873 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278906 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.274989 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275109 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279916 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275123 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275403 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275527 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275682 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275813 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.275913 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.276162 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.276340 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279992 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.276854 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277041 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277115 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277231 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277280 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277261 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277699 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.277772 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278428 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.278470 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279031 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279213 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279311 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279379 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279439 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279594 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279619 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279671 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.279717 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.276356 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.280020 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.280222 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.280334 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.281281 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.281671 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.281813 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.282256 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.282742 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.282700 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.283301 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.283343 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.283412 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.283421 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.283491 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.283616 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.283576 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.283318 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.283988 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.284321 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.284445 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.286002 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.286131 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.286269 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.286606 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.286693 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.286827 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.286859 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.287161 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.287281 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.287675 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.287778 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.288122 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.288274 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.288398 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.289905 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.290551 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.290612 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.290925 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.287171 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.291600 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.292304 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.292373 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.292421 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.293346 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.293520 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:50:08.793480858 +0000 UTC m=+70.412539341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.293989 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.295108 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.295201 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.294604 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.296408 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.296916 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.297572 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.297915 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.298592 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.297913 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.299800 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.298297 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.298407 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.298561 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.298766 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.298761 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300093 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300193 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300240 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300290 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300317 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300379 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300428 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300462 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300539 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300563 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300620 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300710 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300736 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300795 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300843 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.298872 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300906 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.300966 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301061 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301114 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301151 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301161 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.299099 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.299325 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301281 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.299454 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.299670 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301340 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301390 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301388 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301418 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301469 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301530 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301557 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301587 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301680 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301785 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301835 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.301864 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302019 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302084 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302126 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302167 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302216 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302256 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302283 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302355 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302440 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302691 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302770 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302801 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302951 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302992 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303019 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303081 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-multus-cni-dir\") 
pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303122 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/465fb356-3c99-4881-81aa-0cad744fd120-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303158 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303182 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-log-socket\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303205 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-netd\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303228 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-var-lib-kubelet\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303249 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/86687a8a-6996-44fa-a62e-b43266c31922-multus-daemon-config\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303274 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8556181-42f0-45af-8922-fd147917bce5-host\") pod \"node-ca-ccj66\" (UID: \"c8556181-42f0-45af-8922-fd147917bce5\") " pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303298 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c8556181-42f0-45af-8922-fd147917bce5-serviceca\") pod \"node-ca-ccj66\" (UID: \"c8556181-42f0-45af-8922-fd147917bce5\") " pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303322 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-system-cni-dir\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303351 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5d426fc2-19af-43bc-a39c-c63afb2d9909-hosts-file\") pod \"node-resolver-hg2sm\" (UID: \"5d426fc2-19af-43bc-a39c-c63afb2d9909\") " pod="openshift-dns/node-resolver-hg2sm" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303375 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303396 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-slash\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303420 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-var-lib-cni-bin\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303443 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-hostroot\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303472 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303522 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303597 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303625 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bf688571-4e47-42da-80b4-0d54580ce6c8-proxy-tls\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303651 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-cnibin\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303675 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91cccac8-913c-4bcf-a654-298dfce0a471-ovn-node-metrics-cert\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303702 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/465fb356-3c99-4881-81aa-0cad744fd120-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303733 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303756 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-config\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303785 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf688571-4e47-42da-80b4-0d54580ce6c8-mcd-auth-proxy-config\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303623 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.302681 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303089 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303169 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303170 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303381 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303837 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303864 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304319 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.303855 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304383 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304420 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-cnibin\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304458 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-multus-socket-dir-parent\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304490 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-run-multus-certs\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304553 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-systemd\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304580 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-bin\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304604 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86687a8a-6996-44fa-a62e-b43266c31922-cni-binary-copy\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304628 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-multus-conf-dir\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304644 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304657 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-netns\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304690 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-node-log\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304716 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66lkv\" (UniqueName: \"kubernetes.io/projected/c8556181-42f0-45af-8922-fd147917bce5-kube-api-access-66lkv\") pod \"node-ca-ccj66\" (UID: \"c8556181-42f0-45af-8922-fd147917bce5\") " pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304756 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304776 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-var-lib-openvswitch\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304927 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304801 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-os-release\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.304984 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305115 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhcxb\" (UniqueName: \"kubernetes.io/projected/bf688571-4e47-42da-80b4-0d54580ce6c8-kube-api-access-rhcxb\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305187 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-ovn\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305220 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-system-cni-dir\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305246 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-run-k8s-cni-cncf-io\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305277 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305303 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/42019c71-4e1e-4a98-aee6-91061deb320a-cni-binary-copy\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305330 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305356 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-etc-kubernetes\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305384 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305410 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-kubelet\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305436 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-systemd-units\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305459 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-ovn-kubernetes\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305485 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305490 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305538 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-var-lib-cni-multus\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 
14:50:08.305569 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/465fb356-3c99-4881-81aa-0cad744fd120-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305602 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305635 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305663 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fhm4\" (UniqueName: \"kubernetes.io/projected/42019c71-4e1e-4a98-aee6-91061deb320a-kube-api-access-7fhm4\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305691 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxw68\" (UniqueName: \"kubernetes.io/projected/99337e5a-7ecb-4ed1-8ec5-14979be84e68-kube-api-access-jxw68\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305723 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-env-overrides\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305751 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-run-netns\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305780 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4kz\" (UniqueName: \"kubernetes.io/projected/465fb356-3c99-4881-81aa-0cad744fd120-kube-api-access-fv4kz\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305811 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305841 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/42019c71-4e1e-4a98-aee6-91061deb320a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305872 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305905 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305327 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.305745 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.306135 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-etc-openvswitch\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.306177 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-script-lib\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.306204 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjhfr\" (UniqueName: \"kubernetes.io/projected/91cccac8-913c-4bcf-a654-298dfce0a471-kube-api-access-tjhfr\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.306229 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jzhg\" (UniqueName: \"kubernetes.io/projected/86687a8a-6996-44fa-a62e-b43266c31922-kube-api-access-6jzhg\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.306326 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.306453 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.306535 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.307027 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.307193 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bf688571-4e47-42da-80b4-0d54580ce6c8-rootfs\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.307246 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-os-release\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.307285 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbzb9\" (UniqueName: \"kubernetes.io/projected/5d426fc2-19af-43bc-a39c-c63afb2d9909-kube-api-access-sbzb9\") pod \"node-resolver-hg2sm\" (UID: \"5d426fc2-19af-43bc-a39c-c63afb2d9909\") " pod="openshift-dns/node-resolver-hg2sm" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.307321 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-openvswitch\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.307453 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.307526 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.307549 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.307927 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.308027 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.308079 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.308230 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.308318 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.308641 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.308340 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.310047 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.310175 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.311038 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.311819 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.311963 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.312222 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.312445 4982 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.313278 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.313696 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.313775 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.313859 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:08.81383641 +0000 UTC m=+70.432894913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.314688 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.314794 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:08.814765774 +0000 UTC m=+70.433824277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.314883 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.315410 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.315829 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.316734 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.317787 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.318302 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.318324 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.318341 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.318366 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.318409 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:08.818391449 +0000 UTC m=+70.437449952 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.319044 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.319174 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.319887 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.319970 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320143 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320183 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320221 4982 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320244 4982 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320269 4982 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320293 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320314 4982 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320334 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320358 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320403 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320832 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.321084 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.321746 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.321798 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.321879 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:08.821861349 +0000 UTC m=+70.440919842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.320890 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.321989 4982 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322029 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322051 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322100 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322120 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322151 4982 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322168 4982 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322221 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322254 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.321412 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322269 4982 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.321414 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322319 4982 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322377 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.321646 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322409 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322512 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322584 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322572 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322659 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322674 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322687 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322699 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322711 4982 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322724 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322742 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322754 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322751 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322769 4982 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322784 4982 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322799 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322811 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322822 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322834 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322846 4982 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322857 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322867 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322610 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.322879 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.323188 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.323343 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.323398 4982 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.323438 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.323614 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.323780 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.323837 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.323966 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324024 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324122 4982 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324245 4982 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: 
I0224 14:50:08.324582 4982 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324607 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324622 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324636 4982 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324650 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324666 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324680 4982 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324693 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324707 4982 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324719 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324733 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324745 4982 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324758 4982 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324771 
4982 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324783 4982 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324796 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324810 4982 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324827 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324840 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324852 4982 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324867 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324881 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324894 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324912 4982 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324925 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324940 4982 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 
14:50:08.324953 4982 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324966 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324978 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.324990 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325003 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325017 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325048 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325067 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325082 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325094 4982 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325107 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325119 4982 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325132 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 
14:50:08.325446 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325741 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325775 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.325805 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.326078 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.326107 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.326212 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.326274 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.326706 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.326820 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.327080 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.327174 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.327316 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.327385 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.327410 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.327416 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.328088 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.329455 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.329723 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.329743 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.329812 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.329881 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.330152 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.330286 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.330488 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.330535 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.330723 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.331279 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.331341 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.332877 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.332901 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.332928 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.333209 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.333254 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.333334 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.333643 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.334532 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.334654 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.335871 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.336099 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.336099 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.338883 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.339172 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.339180 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.339542 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.339586 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.339757 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.339880 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.340092 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.340180 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.341037 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.341524 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.345191 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.361348 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.363491 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.373752 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.374493 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426161 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-var-lib-openvswitch\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426221 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-os-release\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426247 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-ovn\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426271 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-system-cni-dir\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426292 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-run-k8s-cni-cncf-io\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426330 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhcxb\" (UniqueName: \"kubernetes.io/projected/bf688571-4e47-42da-80b4-0d54580ce6c8-kube-api-access-rhcxb\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426359 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-etc-kubernetes\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426392 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/42019c71-4e1e-4a98-aee6-91061deb320a-cni-binary-copy\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426415 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:08 crc 
kubenswrapper[4982]: I0224 14:50:08.426443 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426469 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-var-lib-cni-multus\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426515 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/465fb356-3c99-4881-81aa-0cad744fd120-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426530 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-os-release\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426546 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-kubelet\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426599 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-kubelet\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426618 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-var-lib-openvswitch\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426642 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-systemd-units\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426655 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-ovn\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426689 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-ovn-kubernetes\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426701 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-systemd-units\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426716 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-env-overrides\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426741 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-run-netns\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426762 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv4kz\" (UniqueName: \"kubernetes.io/projected/465fb356-3c99-4881-81aa-0cad744fd120-kube-api-access-fv4kz\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426786 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fhm4\" (UniqueName: \"kubernetes.io/projected/42019c71-4e1e-4a98-aee6-91061deb320a-kube-api-access-7fhm4\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426803 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxw68\" (UniqueName: \"kubernetes.io/projected/99337e5a-7ecb-4ed1-8ec5-14979be84e68-kube-api-access-jxw68\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426813 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-system-cni-dir\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426820 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-etc-openvswitch\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426843 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-etc-openvswitch\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426850 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-script-lib\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426866 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-ovn-kubernetes\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426875 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjhfr\" (UniqueName: \"kubernetes.io/projected/91cccac8-913c-4bcf-a654-298dfce0a471-kube-api-access-tjhfr\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426901 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jzhg\" (UniqueName: \"kubernetes.io/projected/86687a8a-6996-44fa-a62e-b43266c31922-kube-api-access-6jzhg\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426925 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/42019c71-4e1e-4a98-aee6-91061deb320a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426951 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bf688571-4e47-42da-80b4-0d54580ce6c8-rootfs\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426974 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-os-release\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.426998 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbzb9\" (UniqueName: \"kubernetes.io/projected/5d426fc2-19af-43bc-a39c-c63afb2d9909-kube-api-access-sbzb9\") pod \"node-resolver-hg2sm\" (UID: \"5d426fc2-19af-43bc-a39c-c63afb2d9909\") " pod="openshift-dns/node-resolver-hg2sm" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427022 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-openvswitch\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427056 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-multus-cni-dir\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427087 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/465fb356-3c99-4881-81aa-0cad744fd120-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427117 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-var-lib-kubelet\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427144 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/86687a8a-6996-44fa-a62e-b43266c31922-multus-daemon-config\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427165 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8556181-42f0-45af-8922-fd147917bce5-host\") pod \"node-ca-ccj66\" (UID: \"c8556181-42f0-45af-8922-fd147917bce5\") " pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427186 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c8556181-42f0-45af-8922-fd147917bce5-serviceca\") pod \"node-ca-ccj66\" (UID: \"c8556181-42f0-45af-8922-fd147917bce5\") " pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427234 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-log-socket\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427257 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-netd\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427278 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-slash\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427312 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-var-lib-cni-bin\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427337 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-hostroot\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427355 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-env-overrides\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427366 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427399 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-system-cni-dir\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427408 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427422 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5d426fc2-19af-43bc-a39c-c63afb2d9909-hosts-file\") pod \"node-resolver-hg2sm\" (UID: \"5d426fc2-19af-43bc-a39c-c63afb2d9909\") " pod="openshift-dns/node-resolver-hg2sm" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427442 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-run-k8s-cni-cncf-io\") pod 
\"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427516 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427609 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-run-netns\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427764 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-openvswitch\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427830 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-etc-kubernetes\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427875 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-multus-cni-dir\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.428065 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-system-cni-dir\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.428181 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-log-socket\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.428229 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-var-lib-kubelet\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.428665 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.428761 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs podName:99337e5a-7ecb-4ed1-8ec5-14979be84e68 nodeName:}" failed. 
No retries permitted until 2026-02-24 14:50:08.928736931 +0000 UTC m=+70.547795644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs") pod "network-metrics-daemon-6gwqq" (UID: "99337e5a-7ecb-4ed1-8ec5-14979be84e68") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.428819 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-var-lib-cni-bin\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.427444 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429005 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-slash\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.428989 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8556181-42f0-45af-8922-fd147917bce5-host\") pod \"node-ca-ccj66\" (UID: \"c8556181-42f0-45af-8922-fd147917bce5\") " pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429046 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-netd\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429065 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bf688571-4e47-42da-80b4-0d54580ce6c8-rootfs\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429091 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/42019c71-4e1e-4a98-aee6-91061deb320a-cni-binary-copy\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429144 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-hostroot\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429275 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-os-release\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429382 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/465fb356-3c99-4881-81aa-0cad744fd120-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429478 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bf688571-4e47-42da-80b4-0d54580ce6c8-proxy-tls\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429591 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-cnibin\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429666 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91cccac8-913c-4bcf-a654-298dfce0a471-ovn-node-metrics-cert\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.428924 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429942 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-config\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.429994 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-cnibin\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.430027 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-multus-socket-dir-parent\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.430060 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-run-multus-certs\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.430100 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf688571-4e47-42da-80b4-0d54580ce6c8-mcd-auth-proxy-config\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.430229 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5d426fc2-19af-43bc-a39c-c63afb2d9909-hosts-file\") pod \"node-resolver-hg2sm\" (UID: \"5d426fc2-19af-43bc-a39c-c63afb2d9909\") " pod="openshift-dns/node-resolver-hg2sm" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.430822 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431024 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/86687a8a-6996-44fa-a62e-b43266c31922-multus-daemon-config\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431209 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/42019c71-4e1e-4a98-aee6-91061deb320a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 
14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431318 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-multus-conf-dir\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431404 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-systemd\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431494 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-bin\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431602 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86687a8a-6996-44fa-a62e-b43266c31922-cni-binary-copy\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431706 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-netns\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431782 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-node-log\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431832 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-config\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431862 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-script-lib\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.431939 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66lkv\" (UniqueName: \"kubernetes.io/projected/c8556181-42f0-45af-8922-fd147917bce5-kube-api-access-66lkv\") pod \"node-ca-ccj66\" (UID: \"c8556181-42f0-45af-8922-fd147917bce5\") " pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.432491 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" 
(UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-run-multus-certs\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.432736 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-node-log\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433152 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bf688571-4e47-42da-80b4-0d54580ce6c8-mcd-auth-proxy-config\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433207 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-bin\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433235 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-multus-conf-dir\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433257 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-systemd\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433291 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-host-var-lib-cni-multus\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433320 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433347 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/42019c71-4e1e-4a98-aee6-91061deb320a-cnibin\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433430 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-netns\") pod \"ovnkube-node-96fkj\" 
(UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433489 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-cnibin\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.433588 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/86687a8a-6996-44fa-a62e-b43266c31922-multus-socket-dir-parent\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.434035 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86687a8a-6996-44fa-a62e-b43266c31922-cni-binary-copy\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.434130 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/465fb356-3c99-4881-81aa-0cad744fd120-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.434321 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c8556181-42f0-45af-8922-fd147917bce5-serviceca\") pod \"node-ca-ccj66\" (UID: \"c8556181-42f0-45af-8922-fd147917bce5\") " pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.439412 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/465fb356-3c99-4881-81aa-0cad744fd120-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.445269 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/465fb356-3c99-4881-81aa-0cad744fd120-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.447724 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxw68\" (UniqueName: \"kubernetes.io/projected/99337e5a-7ecb-4ed1-8ec5-14979be84e68-kube-api-access-jxw68\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.447937 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bf688571-4e47-42da-80b4-0d54580ce6c8-proxy-tls\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " 
pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448093 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448114 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448125 4982 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448138 4982 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448151 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448164 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448175 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448186 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448197 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448207 4982 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448220 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448230 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448240 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" 
DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448249 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448258 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448268 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448278 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448287 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448298 4982 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448308 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448317 4982 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448326 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448338 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448347 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448356 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448366 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: 
I0224 14:50:08.448375 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448462 4982 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448481 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448492 4982 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448514 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448524 4982 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448533 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448543 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448555 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448565 4982 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448574 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448583 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448594 4982 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448602 4982 reconciler_common.go:293] "Volume detached for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448611 4982 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448621 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448629 4982 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448638 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448648 4982 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448657 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448666 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448676 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448686 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448695 4982 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448704 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448714 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448725 4982 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448736 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448745 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448755 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448764 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448774 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448783 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448791 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448800 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448809 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448818 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448829 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448838 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448849 4982 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448858 4982 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448866 4982 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448875 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448885 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448895 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448903 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448913 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448921 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448930 4982 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448938 4982 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448947 4982 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448957 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448983 4982 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.448992 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449001 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449011 4982 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449897 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449912 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449921 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449934 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449943 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449953 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449992 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450003 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450012 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450022 
4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450031 4982 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450040 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450051 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450061 4982 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450074 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450083 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450093 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450103 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450123 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450132 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450140 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450557 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.450571 4982 reconciler_common.go:293] "Volume 
detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.449328 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91cccac8-913c-4bcf-a654-298dfce0a471-ovn-node-metrics-cert\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.451672 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbzb9\" (UniqueName: \"kubernetes.io/projected/5d426fc2-19af-43bc-a39c-c63afb2d9909-kube-api-access-sbzb9\") pod \"node-resolver-hg2sm\" (UID: \"5d426fc2-19af-43bc-a39c-c63afb2d9909\") " pod="openshift-dns/node-resolver-hg2sm" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.454655 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjhfr\" (UniqueName: \"kubernetes.io/projected/91cccac8-913c-4bcf-a654-298dfce0a471-kube-api-access-tjhfr\") pod \"ovnkube-node-96fkj\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.455131 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fhm4\" (UniqueName: \"kubernetes.io/projected/42019c71-4e1e-4a98-aee6-91061deb320a-kube-api-access-7fhm4\") pod \"multus-additional-cni-plugins-lknrx\" (UID: \"42019c71-4e1e-4a98-aee6-91061deb320a\") " pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.457111 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv4kz\" (UniqueName: \"kubernetes.io/projected/465fb356-3c99-4881-81aa-0cad744fd120-kube-api-access-fv4kz\") pod \"ovnkube-control-plane-749d76644c-z9jtq\" (UID: \"465fb356-3c99-4881-81aa-0cad744fd120\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.458740 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.458628 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jzhg\" (UniqueName: \"kubernetes.io/projected/86687a8a-6996-44fa-a62e-b43266c31922-kube-api-access-6jzhg\") pod \"multus-jgtdj\" (UID: \"86687a8a-6996-44fa-a62e-b43266c31922\") " pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.459517 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhcxb\" (UniqueName: \"kubernetes.io/projected/bf688571-4e47-42da-80b4-0d54580ce6c8-kube-api-access-rhcxb\") pod \"machine-config-daemon-b79sf\" (UID: \"bf688571-4e47-42da-80b4-0d54580ce6c8\") " pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.460336 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.462164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66lkv\" (UniqueName: \"kubernetes.io/projected/c8556181-42f0-45af-8922-fd147917bce5-kube-api-access-66lkv\") pod \"node-ca-ccj66\" (UID: \"c8556181-42f0-45af-8922-fd147917bce5\") " pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.462293 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hg2sm" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.475337 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 14:50:08 crc kubenswrapper[4982]: W0224 14:50:08.480106 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d426fc2_19af_43bc_a39c_c63afb2d9909.slice/crio-5224b9a9e5d6bc89291607822452820e8401d3fbee488e7929d7e0abf9e1bc5b WatchSource:0}: Error finding container 5224b9a9e5d6bc89291607822452820e8401d3fbee488e7929d7e0abf9e1bc5b: Status 404 returned error can't find the container with id 5224b9a9e5d6bc89291607822452820e8401d3fbee488e7929d7e0abf9e1bc5b Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.483746 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.483972 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 24 14:50:08 crc kubenswrapper[4982]: set -uo pipefail Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 24 14:50:08 crc kubenswrapper[4982]: HOSTS_FILE="/etc/hosts" Feb 24 14:50:08 crc kubenswrapper[4982]: TEMP_FILE="/etc/hosts.tmp" Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: # Make a temporary file with the old hosts file's attributes. Feb 24 14:50:08 crc kubenswrapper[4982]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 24 14:50:08 crc kubenswrapper[4982]: echo "Failed to preserve hosts file. Exiting." Feb 24 14:50:08 crc kubenswrapper[4982]: exit 1 Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: while true; do Feb 24 14:50:08 crc kubenswrapper[4982]: declare -A svc_ips Feb 24 14:50:08 crc kubenswrapper[4982]: for svc in "${services[@]}"; do Feb 24 14:50:08 crc kubenswrapper[4982]: # Fetch service IP from cluster dns if present. We make several tries Feb 24 14:50:08 crc kubenswrapper[4982]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 24 14:50:08 crc kubenswrapper[4982]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 24 14:50:08 crc kubenswrapper[4982]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 24 14:50:08 crc kubenswrapper[4982]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 14:50:08 crc kubenswrapper[4982]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 14:50:08 crc kubenswrapper[4982]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 14:50:08 crc kubenswrapper[4982]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 24 14:50:08 crc kubenswrapper[4982]: for i in ${!cmds[*]} Feb 24 14:50:08 crc kubenswrapper[4982]: do Feb 24 14:50:08 crc kubenswrapper[4982]: ips=($(eval "${cmds[i]}")) Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: svc_ips["${svc}"]="${ips[@]}" Feb 24 14:50:08 crc kubenswrapper[4982]: break Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: # Update /etc/hosts only if we get valid service IPs Feb 24 14:50:08 crc kubenswrapper[4982]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 24 14:50:08 crc kubenswrapper[4982]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 24 14:50:08 crc kubenswrapper[4982]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 24 14:50:08 crc kubenswrapper[4982]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 24 14:50:08 crc kubenswrapper[4982]: sleep 60 & wait Feb 24 14:50:08 crc kubenswrapper[4982]: continue Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: # Append resolver entries for services Feb 24 14:50:08 crc kubenswrapper[4982]: rc=0 Feb 24 14:50:08 crc kubenswrapper[4982]: for svc in "${!svc_ips[@]}"; do Feb 24 14:50:08 crc kubenswrapper[4982]: for ip in ${svc_ips[${svc}]}; do Feb 24 14:50:08 crc kubenswrapper[4982]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ $rc -ne 0 ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: sleep 60 & wait Feb 24 14:50:08 crc kubenswrapper[4982]: continue Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 24 14:50:08 crc kubenswrapper[4982]: # Replace /etc/hosts with our modified version if needed Feb 24 14:50:08 crc kubenswrapper[4982]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 24 14:50:08 crc kubenswrapper[4982]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: sleep 60 & wait Feb 24 14:50:08 crc kubenswrapper[4982]: unset svc_ips Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbzb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-hg2sm_openshift-dns(5d426fc2-19af-43bc-a39c-c63afb2d9909): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.486465 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-hg2sm" podUID="5d426fc2-19af-43bc-a39c-c63afb2d9909" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.490966 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ -f "/env/_master" ]]; then Feb 24 14:50:08 crc 
kubenswrapper[4982]: set -o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: source "/env/_master" Feb 24 14:50:08 crc kubenswrapper[4982]: set +o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 24 14:50:08 crc kubenswrapper[4982]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 14:50:08 crc kubenswrapper[4982]: ho_enable="--enable-hybrid-overlay" Feb 24 14:50:08 crc kubenswrapper[4982]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 14:50:08 crc kubenswrapper[4982]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 14:50:08 crc kubenswrapper[4982]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 14:50:08 crc kubenswrapper[4982]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 14:50:08 crc kubenswrapper[4982]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --webhook-host=127.0.0.1 \ Feb 24 14:50:08 crc kubenswrapper[4982]: --webhook-port=9743 \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${ho_enable} \ Feb 24 14:50:08 crc kubenswrapper[4982]: --enable-interconnect \ Feb 24 14:50:08 crc kubenswrapper[4982]: --disable-approver \ Feb 24 14:50:08 crc kubenswrapper[4982]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --wait-for-kubernetes-api=200s \ Feb 24 14:50:08 crc kubenswrapper[4982]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --loglevel="${LOGLEVEL}" Feb 24 14:50:08 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.492398 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ccj66" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.495524 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ -f "/env/_master" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: set -o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: source "/env/_master" Feb 24 14:50:08 crc kubenswrapper[4982]: set +o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 14:50:08 crc kubenswrapper[4982]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 14:50:08 crc kubenswrapper[4982]: --disable-webhook \ Feb 24 14:50:08 crc kubenswrapper[4982]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --loglevel="${LOGLEVEL}" Feb 24 14:50:08 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.496177 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.496620 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.498117 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8"} Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.498421 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.498865 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"660c501958f3b2a35d1c865f6cedf7c3e75a2cf89c751ad1179e4ec1689377b0"} Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.499602 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hg2sm" event={"ID":"5d426fc2-19af-43bc-a39c-c63afb2d9909","Type":"ContainerStarted","Data":"5224b9a9e5d6bc89291607822452820e8401d3fbee488e7929d7e0abf9e1bc5b"} Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.499912 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.500335 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7ca42015170aeff27ee5c0a931c5d7cfd2764f15e136935c2de6e84dd9d8745b"} Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.501920 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Feb 24 14:50:08 crc kubenswrapper[4982]: set -uo pipefail Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 24 14:50:08 crc kubenswrapper[4982]: HOSTS_FILE="/etc/hosts" Feb 24 14:50:08 crc kubenswrapper[4982]: TEMP_FILE="/etc/hosts.tmp" Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: # Make a temporary file with the old hosts file's attributes. Feb 24 14:50:08 crc kubenswrapper[4982]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 24 14:50:08 crc kubenswrapper[4982]: echo "Failed to preserve hosts file. Exiting." Feb 24 14:50:08 crc kubenswrapper[4982]: exit 1 Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: while true; do Feb 24 14:50:08 crc kubenswrapper[4982]: declare -A svc_ips Feb 24 14:50:08 crc kubenswrapper[4982]: for svc in "${services[@]}"; do Feb 24 14:50:08 crc kubenswrapper[4982]: # Fetch service IP from cluster dns if present. We make several tries Feb 24 14:50:08 crc kubenswrapper[4982]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 24 14:50:08 crc kubenswrapper[4982]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 24 14:50:08 crc kubenswrapper[4982]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 24 14:50:08 crc kubenswrapper[4982]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 14:50:08 crc kubenswrapper[4982]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 14:50:08 crc kubenswrapper[4982]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 24 14:50:08 crc kubenswrapper[4982]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 24 14:50:08 crc kubenswrapper[4982]: for i in ${!cmds[*]} Feb 24 14:50:08 crc kubenswrapper[4982]: do Feb 24 14:50:08 crc kubenswrapper[4982]: ips=($(eval "${cmds[i]}")) Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: svc_ips["${svc}"]="${ips[@]}" Feb 24 14:50:08 crc kubenswrapper[4982]: break Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: # Update /etc/hosts only if we get valid service IPs Feb 24 14:50:08 crc kubenswrapper[4982]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 24 14:50:08 crc kubenswrapper[4982]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 24 14:50:08 crc kubenswrapper[4982]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 24 14:50:08 crc kubenswrapper[4982]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 24 14:50:08 crc kubenswrapper[4982]: sleep 60 & wait Feb 24 14:50:08 crc kubenswrapper[4982]: continue Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: # Append resolver entries for services Feb 24 14:50:08 crc kubenswrapper[4982]: rc=0 Feb 24 14:50:08 crc kubenswrapper[4982]: for svc in "${!svc_ips[@]}"; do Feb 24 14:50:08 crc kubenswrapper[4982]: for ip in ${svc_ips[${svc}]}; do Feb 24 14:50:08 crc kubenswrapper[4982]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ $rc -ne 0 ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: sleep 60 & wait Feb 24 14:50:08 crc kubenswrapper[4982]: continue Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 24 14:50:08 crc kubenswrapper[4982]: # Replace /etc/hosts with our modified version if needed Feb 24 14:50:08 crc kubenswrapper[4982]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 24 14:50:08 crc kubenswrapper[4982]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: sleep 60 & wait Feb 24 14:50:08 crc kubenswrapper[4982]: unset svc_ips Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbzb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-hg2sm_openshift-dns(5d426fc2-19af-43bc-a39c-c63afb2d9909): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.501986 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ -f "/env/_master" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: set -o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: source "/env/_master" Feb 24 14:50:08 crc kubenswrapper[4982]: set +o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 24 14:50:08 crc kubenswrapper[4982]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 14:50:08 crc kubenswrapper[4982]: ho_enable="--enable-hybrid-overlay" Feb 24 14:50:08 crc kubenswrapper[4982]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 14:50:08 crc kubenswrapper[4982]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 14:50:08 crc kubenswrapper[4982]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 14:50:08 crc kubenswrapper[4982]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 14:50:08 crc kubenswrapper[4982]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --webhook-host=127.0.0.1 \ Feb 24 14:50:08 crc kubenswrapper[4982]: --webhook-port=9743 \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${ho_enable} \ Feb 24 14:50:08 crc kubenswrapper[4982]: --enable-interconnect \ Feb 24 14:50:08 crc kubenswrapper[4982]: --disable-approver \ Feb 24 14:50:08 crc kubenswrapper[4982]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --wait-for-kubernetes-api=200s \ Feb 24 14:50:08 crc kubenswrapper[4982]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --loglevel="${LOGLEVEL}" Feb 24 14:50:08 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.502195 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.503169 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-hg2sm" podUID="5d426fc2-19af-43bc-a39c-c63afb2d9909" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.503275 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 14:50:08 crc kubenswrapper[4982]: W0224 14:50:08.503358 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91cccac8_913c_4bcf_a654_298dfce0a471.slice/crio-68f0edfaec0a349fa2bca761d212b8897185bcd1ed3f8156ce092422d8f7c3b0 WatchSource:0}: Error finding container 68f0edfaec0a349fa2bca761d212b8897185bcd1ed3f8156ce092422d8f7c3b0: Status 404 returned error can't find the container with id 
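
Every CreateContainerConfigError in this stretch carries the same message: "services have not yet been read at least once, cannot construct envvars". Before starting any container, the kubelet synthesizes the legacy Docker-link-style service environment variables from its local Service informer; until that informer has completed one full list, container creation is refused and retried, which is why node-resolver, iptables-alerter, the network-node-identity webhook/approver, node-ca and the ovnkube containers all fail identically at 14:50:08. A sketch of the kind of variables being constructed (the naming convention is standard Kubernetes; the ClusterIP below is an invented placeholder, not taken from this log):

    # For a Service named "kubernetes" on port 443, the kubelet injects into
    # each container it starts (placeholder address, illustration only):
    KUBERNETES_SERVICE_HOST=10.217.4.1
    KUBERNETES_SERVICE_PORT=443
    KUBERNETES_PORT=tcp://10.217.4.1:443
    KUBERNETES_PORT_443_TCP=tcp://10.217.4.1:443
    KUBERNETES_PORT_443_TCP_PROTO=tcp
    KUBERNETES_PORT_443_TCP_PORT=443
    KUBERNETES_PORT_443_TCP_ADDR=10.217.4.1
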
68f0edfaec0a349fa2bca761d212b8897185bcd1ed3f8156ce092422d8f7c3b0 Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.527029 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 24 14:50:08 crc kubenswrapper[4982]: apiVersion: v1 Feb 24 14:50:08 crc kubenswrapper[4982]: clusters: Feb 24 14:50:08 crc kubenswrapper[4982]: - cluster: Feb 24 14:50:08 crc kubenswrapper[4982]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 24 14:50:08 crc kubenswrapper[4982]: server: https://api-int.crc.testing:6443 Feb 24 14:50:08 crc kubenswrapper[4982]: name: default-cluster Feb 24 14:50:08 crc kubenswrapper[4982]: contexts: Feb 24 14:50:08 crc kubenswrapper[4982]: - context: Feb 24 14:50:08 crc kubenswrapper[4982]: cluster: default-cluster Feb 24 14:50:08 crc kubenswrapper[4982]: namespace: default Feb 24 14:50:08 crc kubenswrapper[4982]: user: default-auth Feb 24 14:50:08 crc kubenswrapper[4982]: name: default-context Feb 24 14:50:08 crc kubenswrapper[4982]: current-context: default-context Feb 24 14:50:08 crc kubenswrapper[4982]: kind: Config Feb 24 14:50:08 crc kubenswrapper[4982]: preferences: {} Feb 24 14:50:08 crc kubenswrapper[4982]: users: Feb 24 14:50:08 crc kubenswrapper[4982]: - name: default-auth Feb 24 14:50:08 crc kubenswrapper[4982]: user: Feb 24 14:50:08 crc kubenswrapper[4982]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 14:50:08 crc kubenswrapper[4982]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 14:50:08 crc kubenswrapper[4982]: EOF Feb 24 14:50:08 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjhfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.527516 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
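
The kubecfg-setup init container above does exactly one thing: write the kubeconfig the ovnkube-node containers use to reach the API server with the node's rotating client certificate. Its command, reassembled from the journal-prefixed fragments (content verbatim; YAML indentation, which the journal flattens, restored to standard kubeconfig layout):

    #!/bin/bash
    # Write the kubeconfig used by the other ovnkube-node containers.
    cat << EOF > /etc/ovn/kubeconfig
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://api-int.crc.testing:6443
      name: default-cluster
    contexts:
    - context:
        cluster: default-cluster
        namespace: default
        user: default-auth
      name: default-context
    current-context: default-context
    kind: Config
    preferences: {}
    users:
    - name: default-auth
      user:
        client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
        client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
    EOF
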
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.527688 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ -f "/env/_master" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: set -o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: source "/env/_master" Feb 24 14:50:08 crc kubenswrapper[4982]: set +o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 14:50:08 crc kubenswrapper[4982]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 14:50:08 crc kubenswrapper[4982]: --disable-webhook \ Feb 24 14:50:08 crc kubenswrapper[4982]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --loglevel="${LOGLEVEL}" Feb 24 14:50:08 crc kubenswrapper[4982]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.535533 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.535530 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 14:50:08 crc kubenswrapper[4982]: W0224 14:50:08.549688 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod465fb356_3c99_4881_81aa_0cad744fd120.slice/crio-dfdfd68599ee231f4d1cc0865d3037d79464d569f96fb12677cb03f8ec78ac35 WatchSource:0}: Error finding container dfdfd68599ee231f4d1cc0865d3037d79464d569f96fb12677cb03f8ec78ac35: Status 404 returned error can't find the container with id dfdfd68599ee231f4d1cc0865d3037d79464d569f96fb12677cb03f8ec78ac35 Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.550136 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo 
shutting down node-ca; exit 0' TERM Feb 24 14:50:08 crc kubenswrapper[4982]: while [ true ]; Feb 24 14:50:08 crc kubenswrapper[4982]: do Feb 24 14:50:08 crc kubenswrapper[4982]: for f in $(ls /tmp/serviceca); do Feb 24 14:50:08 crc kubenswrapper[4982]: echo $f Feb 24 14:50:08 crc kubenswrapper[4982]: ca_file_path="/tmp/serviceca/${f}" Feb 24 14:50:08 crc kubenswrapper[4982]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 24 14:50:08 crc kubenswrapper[4982]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 24 14:50:08 crc kubenswrapper[4982]: if [ -e "${reg_dir_path}" ]; then Feb 24 14:50:08 crc kubenswrapper[4982]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 24 14:50:08 crc kubenswrapper[4982]: else Feb 24 14:50:08 crc kubenswrapper[4982]: mkdir $reg_dir_path Feb 24 14:50:08 crc kubenswrapper[4982]: cp $ca_file_path $reg_dir_path/ca.crt Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: for d in $(ls /etc/docker/certs.d); do Feb 24 14:50:08 crc kubenswrapper[4982]: echo $d Feb 24 14:50:08 crc kubenswrapper[4982]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 24 14:50:08 crc kubenswrapper[4982]: reg_conf_path="/tmp/serviceca/${dp}" Feb 24 14:50:08 crc kubenswrapper[4982]: if [ ! -e "${reg_conf_path}" ]; then Feb 24 14:50:08 crc kubenswrapper[4982]: rm -rf /etc/docker/certs.d/$d Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: sleep 60 & wait ${!} Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66lkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-ccj66_openshift-image-registry(c8556181-42f0-45af-8922-fd147917bce5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.552254 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-ccj66" 
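
The node-ca command above reassembles into a 60-second reconciliation loop: it mirrors the serviceca mount into /etc/docker/certs.d, decoding the "host..port" file-name convention back into "host:port" directory names, and prunes directories whose source certificate has disappeared. A cleaned-up, runnable equivalent (quoting tightened, the two copy branches collapsed into mkdir -p plus cp -u; behavior otherwise unchanged from the logged script):

    #!/bin/sh
    # Mirror the service CA bundle into per-registry trust directories.
    # File names under /tmp/serviceca encode "host:port" as "host..port".
    trap 'jobs -p | xargs -r kill; echo "shutting down node-ca"; exit 0' TERM
    while true; do
      for f in /tmp/serviceca/*; do
        [ -e "$f" ] || continue                 # guard: glob may match nothing
        name=$(basename "$f")
        reg_dir="/etc/docker/certs.d/$(echo "$name" | sed -r 's/(.*)\.\./\1:/')"
        mkdir -p "$reg_dir"
        cp -u "$f" "$reg_dir/ca.crt"            # copy when missing or newer
      done
      for d in /etc/docker/certs.d/*; do
        [ -e "$d" ] || continue
        name=$(basename "$d")
        src="/tmp/serviceca/$(echo "$name" | sed -r 's/(.*):/\1\.\./')"
        [ -e "$src" ] || rm -rf "$d"            # prune registries whose CA is gone
      done
      sleep 60 & wait $!                        # backgrounded so TERM can interrupt
    done
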
podUID="c8556181-42f0-45af-8922-fd147917bce5" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.559986 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 24 14:50:08 crc kubenswrapper[4982]: set -euo pipefail Feb 24 14:50:08 crc kubenswrapper[4982]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 24 14:50:08 crc kubenswrapper[4982]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 24 14:50:08 crc kubenswrapper[4982]: # As the secret mount is optional we must wait for the files to be present. Feb 24 14:50:08 crc kubenswrapper[4982]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 24 14:50:08 crc kubenswrapper[4982]: TS=$(date +%s) Feb 24 14:50:08 crc kubenswrapper[4982]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 24 14:50:08 crc kubenswrapper[4982]: HAS_LOGGED_INFO=0 Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: log_missing_certs(){ Feb 24 14:50:08 crc kubenswrapper[4982]: CUR_TS=$(date +%s) Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 24 14:50:08 crc kubenswrapper[4982]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 24 14:50:08 crc kubenswrapper[4982]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 24 14:50:08 crc kubenswrapper[4982]: HAS_LOGGED_INFO=1 Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: } Feb 24 14:50:08 crc kubenswrapper[4982]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 24 14:50:08 crc kubenswrapper[4982]: log_missing_certs Feb 24 14:50:08 crc kubenswrapper[4982]: sleep 5 Feb 24 14:50:08 crc kubenswrapper[4982]: done Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 24 14:50:08 crc kubenswrapper[4982]: exec /usr/bin/kube-rbac-proxy \ Feb 24 14:50:08 crc kubenswrapper[4982]: --logtostderr \ Feb 24 14:50:08 crc kubenswrapper[4982]: --secure-listen-address=:9108 \ Feb 24 14:50:08 crc kubenswrapper[4982]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 24 14:50:08 crc kubenswrapper[4982]: --upstream=http://127.0.0.1:29108/ \ Feb 24 14:50:08 crc kubenswrapper[4982]: --tls-private-key-file=${TLS_PK} \ Feb 24 14:50:08 crc kubenswrapper[4982]: --tls-cert-file=${TLS_CERT} Feb 24 14:50:08 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv4kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-z9jtq_openshift-ovn-kubernetes(465fb356-3c99-4881-81aa-0cad744fd120): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.561617 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.562788 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 
crc kubenswrapper[4982]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ -f "/env/_master" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: set -o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: source "/env/_master" Feb 24 14:50:08 crc kubenswrapper[4982]: set +o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: ovn_v4_join_subnet_opt= Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "" != "" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: ovn_v6_join_subnet_opt= Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "" != "" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: ovn_v4_transit_switch_subnet_opt= Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "" != "" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: ovn_v6_transit_switch_subnet_opt= Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "" != "" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: dns_name_resolver_enabled_flag= Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "false" == "true" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: persistent_ips_enabled_flag= Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "true" == "true" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: # This is needed so that converting clusters from GA to TP Feb 24 14:50:08 crc kubenswrapper[4982]: # will rollout control plane pods as well Feb 24 14:50:08 crc kubenswrapper[4982]: network_segmentation_enabled_flag= Feb 24 14:50:08 crc kubenswrapper[4982]: multi_network_enabled_flag= Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ "true" == "true" ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: multi_network_enabled_flag="--enable-multi-network" Feb 24 14:50:08 crc kubenswrapper[4982]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: Feb 24 14:50:08 crc kubenswrapper[4982]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 24 14:50:08 crc kubenswrapper[4982]: exec /usr/bin/ovnkube \ Feb 24 14:50:08 crc kubenswrapper[4982]: --enable-interconnect \ Feb 24 14:50:08 crc kubenswrapper[4982]: --init-cluster-manager "${K8S_NODE}" \ Feb 24 14:50:08 crc kubenswrapper[4982]: 
--config-file=/run/ovnkube-config/ovnkube.conf \ Feb 24 14:50:08 crc kubenswrapper[4982]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --metrics-bind-address "127.0.0.1:29108" \ Feb 24 14:50:08 crc kubenswrapper[4982]: --metrics-enable-pprof \ Feb 24 14:50:08 crc kubenswrapper[4982]: --metrics-enable-config-duration \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${ovn_v4_join_subnet_opt} \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${ovn_v6_join_subnet_opt} \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${dns_name_resolver_enabled_flag} \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${persistent_ips_enabled_flag} \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${multi_network_enabled_flag} \ Feb 24 14:50:08 crc kubenswrapper[4982]: ${network_segmentation_enabled_flag} Feb 24 14:50:08 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv4kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-z9jtq_openshift-ovn-kubernetes(465fb356-3c99-4881-81aa-0cad744fd120): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.564233 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" podUID="465fb356-3c99-4881-81aa-0cad744fd120" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.578397 4982 status_manager.go:875] "Failed to 
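
The ovnkube-cluster-manager wrapper above is operator-templated: each [[ "" != "" ]] guard is a substitution that rendered empty, the "false" == "true" guard keeps the DNS name resolver off, and the "true" == "true" guards enable persistent IPs, multi-network, and network segmentation. With the dead branches folded away, the container reduces to this exec (flags exactly as logged; per the spec that follows, OVN_KUBE_LOG_LEVEL is 4 and K8S_NODE comes from spec.nodeName):

    exec /usr/bin/ovnkube \
      --enable-interconnect \
      --init-cluster-manager "${K8S_NODE}" \
      --config-file=/run/ovnkube-config/ovnkube.conf \
      --loglevel "${OVN_KUBE_LOG_LEVEL}" \
      --metrics-bind-address "127.0.0.1:29108" \
      --metrics-enable-pprof \
      --metrics-enable-config-duration \
      --enable-persistent-ips \
      --enable-multi-network \
      --enable-network-segmentation
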
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.586726 4982 status_manager.go:875] "Failed to update status for pod" 
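
Each "failed to patch status" entry here is a strategic-merge patch against the Pod's status: the $setElementOrder/conditions directive pins the ordering of the conditions list while the entries themselves merge by their "type" key. Stripped of the three levels of quoting, the logged patches share this shape (abridged; values as recorded in the ovnkube-control-plane record above):

    {
      "metadata": {"uid": "465fb356-3c99-4881-81aa-0cad744fd120"},
      "status": {
        "$setElementOrder/conditions": [
          {"type": "PodReadyToStartContainers"},
          {"type": "Initialized"},
          {"type": "Ready"},
          {"type": "ContainersReady"},
          {"type": "PodScheduled"}
        ],
        "conditions": [
          {"lastTransitionTime": "2026-02-24T14:50:08Z",
           "status": "False", "type": "PodReadyToStartContainers"}
        ],
        "containerStatuses": [ ... ]
      }
    }
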
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.595237 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.605179 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.619146 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.627056 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.633288 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.644395 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.652641 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.659554 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.668641 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.676875 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.685540 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.694061 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.700523 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.708189 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.715660 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.718190 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.728623 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: W0224 14:50:08.732611 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d32042cb4c3de1a54f2ceb465f4b619d3f9d6bb822ee166fc03b509100262494 WatchSource:0}: Error finding container d32042cb4c3de1a54f2ceb465f4b619d3f9d6bb822ee166fc03b509100262494: Status 404 returned error can't find the container with id d32042cb4c3de1a54f2ceb465f4b619d3f9d6bb822ee166fc03b509100262494 Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.734596 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 14:50:08 crc kubenswrapper[4982]: set -o allexport Feb 24 14:50:08 crc kubenswrapper[4982]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 14:50:08 crc kubenswrapper[4982]: source /etc/kubernetes/apiserver-url.env Feb 24 14:50:08 crc kubenswrapper[4982]: else Feb 24 14:50:08 crc kubenswrapper[4982]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 14:50:08 crc kubenswrapper[4982]: exit 1 Feb 24 14:50:08 crc kubenswrapper[4982]: fi Feb 24 14:50:08 crc kubenswrapper[4982]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 14:50:08 crc kubenswrapper[4982]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.735782 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.736917 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.743267 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.743694 4982 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lknrx" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.753147 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jgtdj" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.762773 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhcxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: W0224 14:50:08.766626 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42019c71_4e1e_4a98_aee6_91061deb320a.slice/crio-61f1f930f802ffacbc8e19401530ac4b74e31147b55c9e114e8c81d84b8e9c8e WatchSource:0}: Error finding container 61f1f930f802ffacbc8e19401530ac4b74e31147b55c9e114e8c81d84b8e9c8e: Status 404 returned error can't find the container with id 61f1f930f802ffacbc8e19401530ac4b74e31147b55c9e114e8c81d84b8e9c8e Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.767872 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.774377 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fhm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-lknrx_openshift-multus(42019c71-4e1e-4a98-aee6-91061deb320a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.774626 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true 
--tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhcxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.776987 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.777040 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-lknrx" podUID="42019c71-4e1e-4a98-aee6-91061deb320a" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.783222 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:08 crc kubenswrapper[4982]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 24 14:50:08 crc kubenswrapper[4982]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 24 14:50:08 crc kubenswrapper[4982]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-jgtdj_openshift-multus(86687a8a-6996-44fa-a62e-b43266c31922): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:08 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.785176 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-jgtdj" podUID="86687a8a-6996-44fa-a62e-b43266c31922" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.786224 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.802603 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.827656 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.839348 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.855136 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.855485 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.855546 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:50:09.855485765 +0000 UTC m=+71.474544288 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.855670 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.855732 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.855892 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.855935 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.856026 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:09.856000018 +0000 UTC m=+71.475058591 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.856143 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.856222 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.856306 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.855753 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.856423 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:09.856403749 +0000 UTC m=+71.475462352 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.856559 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.855893 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.856711 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-24 14:50:09.856694936 +0000 UTC m=+71.475753459 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.856841 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.856956 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:09.856943413 +0000 UTC m=+71.476002136 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.859456 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.903467 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.939700 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.958084 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.958279 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: E0224 14:50:08.958492 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs podName:99337e5a-7ecb-4ed1-8ec5-14979be84e68 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:09.958466074 +0000 UTC m=+71.577524597 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs") pod "network-metrics-daemon-6gwqq" (UID: "99337e5a-7ecb-4ed1-8ec5-14979be84e68") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:50:08 crc kubenswrapper[4982]: I0224 14:50:08.980001 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.149663 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.150310 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.151699 4982 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.152379 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.153385 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.153878 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.154519 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.156125 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.156731 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.157122 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.158689 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.160100 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.162756 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.164141 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.165634 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.168494 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.169711 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.170756 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.171773 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.173577 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.175207 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.177876 4982 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.177954 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 24 14:50:09 
crc kubenswrapper[4982]: I0224 14:50:09.182893 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.183896 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.184459 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.185216 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.185692 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.186292 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.187913 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.188364 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.189303 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.189759 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.190609 4982 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.190715 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.192250 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.193179 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 24 14:50:09 crc 
kubenswrapper[4982]: I0224 14:50:09.193665 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.194881 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.195074 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.195687 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.196592 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.197208 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.198195 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.198743 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.199668 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.200278 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.201266 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.201739 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.202615 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.203097 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.204129 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.204592 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.205416 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.205875 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.206758 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.207297 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.207835 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.207878 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.217044 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.270465 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.304303 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.342745 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.378316 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.420792 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.462897 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.502107 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.504443 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgtdj" event={"ID":"86687a8a-6996-44fa-a62e-b43266c31922","Type":"ContainerStarted","Data":"7e80511c5cfd27c7f15409785d39197e6fff3f476e9e9eab3682eff11c26ca12"} Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.505399 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" event={"ID":"42019c71-4e1e-4a98-aee6-91061deb320a","Type":"ContainerStarted","Data":"61f1f930f802ffacbc8e19401530ac4b74e31147b55c9e114e8c81d84b8e9c8e"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.506660 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fhm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-lknrx_openshift-multus(42019c71-4e1e-4a98-aee6-91061deb320a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.506870 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:09 crc kubenswrapper[4982]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 24 14:50:09 crc kubenswrapper[4982]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 24 14:50:09 crc kubenswrapper[4982]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-jgtdj_openshift-multus(86687a8a-6996-44fa-a62e-b43266c31922): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:09 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.508252 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-lknrx" podUID="42019c71-4e1e-4a98-aee6-91061deb320a" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.509207 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-jgtdj" podUID="86687a8a-6996-44fa-a62e-b43266c31922" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.510415 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"43cd56d121f705aaac652e7f6376bd9f3e37c8d67095b4b939ffb49489b1a502"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.513437 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhcxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.514366 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" event={"ID":"465fb356-3c99-4881-81aa-0cad744fd120","Type":"ContainerStarted","Data":"dfdfd68599ee231f4d1cc0865d3037d79464d569f96fb12677cb03f8ec78ac35"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.516202 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:09 crc kubenswrapper[4982]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Feb 24 14:50:09 crc kubenswrapper[4982]: set -euo pipefail Feb 24 14:50:09 crc kubenswrapper[4982]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 24 14:50:09 crc kubenswrapper[4982]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 24 14:50:09 crc kubenswrapper[4982]: # As the secret mount is optional we must wait for the files to be present. Feb 24 14:50:09 crc kubenswrapper[4982]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 24 14:50:09 crc kubenswrapper[4982]: TS=$(date +%s) Feb 24 14:50:09 crc kubenswrapper[4982]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 24 14:50:09 crc kubenswrapper[4982]: HAS_LOGGED_INFO=0 Feb 24 14:50:09 crc kubenswrapper[4982]: Feb 24 14:50:09 crc kubenswrapper[4982]: log_missing_certs(){ Feb 24 14:50:09 crc kubenswrapper[4982]: CUR_TS=$(date +%s) Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 24 14:50:09 crc kubenswrapper[4982]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 24 14:50:09 crc kubenswrapper[4982]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 24 14:50:09 crc kubenswrapper[4982]: HAS_LOGGED_INFO=1 Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: } Feb 24 14:50:09 crc kubenswrapper[4982]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 24 14:50:09 crc kubenswrapper[4982]: log_missing_certs Feb 24 14:50:09 crc kubenswrapper[4982]: sleep 5 Feb 24 14:50:09 crc kubenswrapper[4982]: done Feb 24 14:50:09 crc kubenswrapper[4982]: Feb 24 14:50:09 crc kubenswrapper[4982]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 24 14:50:09 crc kubenswrapper[4982]: exec /usr/bin/kube-rbac-proxy \ Feb 24 14:50:09 crc kubenswrapper[4982]: --logtostderr \ Feb 24 14:50:09 crc kubenswrapper[4982]: --secure-listen-address=:9108 \ Feb 24 14:50:09 crc kubenswrapper[4982]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 24 14:50:09 crc kubenswrapper[4982]: --upstream=http://127.0.0.1:29108/ \ Feb 24 14:50:09 crc kubenswrapper[4982]: --tls-private-key-file=${TLS_PK} \ Feb 24 14:50:09 crc kubenswrapper[4982]: --tls-cert-file=${TLS_CERT} Feb 24 14:50:09 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv4kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-z9jtq_openshift-ovn-kubernetes(465fb356-3c99-4881-81aa-0cad744fd120): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:09 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.516298 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhcxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.516704 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d32042cb4c3de1a54f2ceb465f4b619d3f9d6bb822ee166fc03b509100262494"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.517423 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.517827 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ccj66" event={"ID":"c8556181-42f0-45af-8922-fd147917bce5","Type":"ContainerStarted","Data":"9e1037575396fd8f0797b323c235ed175cd419d63886d80ea66d7d1aabb7e28c"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.518882 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:09 crc kubenswrapper[4982]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ -f "/env/_master" ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: set -o allexport Feb 24 14:50:09 crc kubenswrapper[4982]: source "/env/_master" Feb 24 14:50:09 crc kubenswrapper[4982]: set +o allexport Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: Feb 24 14:50:09 crc kubenswrapper[4982]: ovn_v4_join_subnet_opt= Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ "" != "" ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: ovn_v6_join_subnet_opt= Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ "" != "" ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet 
" Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: Feb 24 14:50:09 crc kubenswrapper[4982]: ovn_v4_transit_switch_subnet_opt= Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ "" != "" ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: ovn_v6_transit_switch_subnet_opt= Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ "" != "" ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: Feb 24 14:50:09 crc kubenswrapper[4982]: dns_name_resolver_enabled_flag= Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ "false" == "true" ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: Feb 24 14:50:09 crc kubenswrapper[4982]: persistent_ips_enabled_flag= Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ "true" == "true" ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: Feb 24 14:50:09 crc kubenswrapper[4982]: # This is needed so that converting clusters from GA to TP Feb 24 14:50:09 crc kubenswrapper[4982]: # will rollout control plane pods as well Feb 24 14:50:09 crc kubenswrapper[4982]: network_segmentation_enabled_flag= Feb 24 14:50:09 crc kubenswrapper[4982]: multi_network_enabled_flag= Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ "true" == "true" ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: multi_network_enabled_flag="--enable-multi-network" Feb 24 14:50:09 crc kubenswrapper[4982]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: Feb 24 14:50:09 crc kubenswrapper[4982]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 24 14:50:09 crc kubenswrapper[4982]: exec /usr/bin/ovnkube \ Feb 24 14:50:09 crc kubenswrapper[4982]: --enable-interconnect \ Feb 24 14:50:09 crc kubenswrapper[4982]: --init-cluster-manager "${K8S_NODE}" \ Feb 24 14:50:09 crc kubenswrapper[4982]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 24 14:50:09 crc kubenswrapper[4982]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 24 14:50:09 crc kubenswrapper[4982]: --metrics-bind-address "127.0.0.1:29108" \ Feb 24 14:50:09 crc kubenswrapper[4982]: --metrics-enable-pprof \ Feb 24 14:50:09 crc kubenswrapper[4982]: --metrics-enable-config-duration \ Feb 24 14:50:09 crc kubenswrapper[4982]: ${ovn_v4_join_subnet_opt} \ Feb 24 14:50:09 crc kubenswrapper[4982]: ${ovn_v6_join_subnet_opt} \ Feb 24 14:50:09 crc kubenswrapper[4982]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 24 14:50:09 crc kubenswrapper[4982]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 24 14:50:09 crc kubenswrapper[4982]: ${dns_name_resolver_enabled_flag} \ Feb 24 14:50:09 crc kubenswrapper[4982]: ${persistent_ips_enabled_flag} \ Feb 24 14:50:09 crc kubenswrapper[4982]: ${multi_network_enabled_flag} \ Feb 24 14:50:09 crc kubenswrapper[4982]: ${network_segmentation_enabled_flag} Feb 24 14:50:09 crc kubenswrapper[4982]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv4kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-z9jtq_openshift-ovn-kubernetes(465fb356-3c99-4881-81aa-0cad744fd120): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:09 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.519163 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:09 crc kubenswrapper[4982]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 14:50:09 crc kubenswrapper[4982]: set -o allexport Feb 24 14:50:09 crc kubenswrapper[4982]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 14:50:09 crc kubenswrapper[4982]: source /etc/kubernetes/apiserver-url.env Feb 24 14:50:09 crc kubenswrapper[4982]: else Feb 24 14:50:09 crc kubenswrapper[4982]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 14:50:09 crc kubenswrapper[4982]: exit 1 Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 14:50:09 crc kubenswrapper[4982]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:09 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.519346 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:09 crc kubenswrapper[4982]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 24 14:50:09 crc kubenswrapper[4982]: while [ true ]; Feb 24 14:50:09 crc kubenswrapper[4982]: do Feb 24 14:50:09 crc kubenswrapper[4982]: for f in $(ls /tmp/serviceca); do Feb 24 14:50:09 crc kubenswrapper[4982]: echo $f Feb 24 14:50:09 crc kubenswrapper[4982]: ca_file_path="/tmp/serviceca/${f}" Feb 24 14:50:09 crc kubenswrapper[4982]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 24 14:50:09 crc kubenswrapper[4982]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 24 14:50:09 crc kubenswrapper[4982]: if [ -e "${reg_dir_path}" ]; then Feb 24 14:50:09 crc kubenswrapper[4982]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 24 14:50:09 crc kubenswrapper[4982]: else Feb 24 14:50:09 crc kubenswrapper[4982]: mkdir $reg_dir_path Feb 24 14:50:09 crc kubenswrapper[4982]: cp $ca_file_path $reg_dir_path/ca.crt Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: done Feb 24 14:50:09 crc kubenswrapper[4982]: for d in $(ls /etc/docker/certs.d); do Feb 24 14:50:09 crc kubenswrapper[4982]: echo $d Feb 24 14:50:09 crc kubenswrapper[4982]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 24 14:50:09 crc kubenswrapper[4982]: reg_conf_path="/tmp/serviceca/${dp}" Feb 24 14:50:09 crc kubenswrapper[4982]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 24 14:50:09 crc kubenswrapper[4982]: rm -rf /etc/docker/certs.d/$d Feb 24 14:50:09 crc kubenswrapper[4982]: fi Feb 24 14:50:09 crc kubenswrapper[4982]: done Feb 24 14:50:09 crc kubenswrapper[4982]: sleep 60 & wait ${!} Feb 24 14:50:09 crc kubenswrapper[4982]: done Feb 24 14:50:09 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66lkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-ccj66_openshift-image-registry(c8556181-42f0-45af-8922-fd147917bce5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:09 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.520109 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"68f0edfaec0a349fa2bca761d212b8897185bcd1ed3f8156ce092422d8f7c3b0"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.520106 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" podUID="465fb356-3c99-4881-81aa-0cad744fd120" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.520254 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.520831 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-ccj66" 
podUID="c8556181-42f0-45af-8922-fd147917bce5" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.523035 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:50:09 crc kubenswrapper[4982]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 24 14:50:09 crc kubenswrapper[4982]: apiVersion: v1 Feb 24 14:50:09 crc kubenswrapper[4982]: clusters: Feb 24 14:50:09 crc kubenswrapper[4982]: - cluster: Feb 24 14:50:09 crc kubenswrapper[4982]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 24 14:50:09 crc kubenswrapper[4982]: server: https://api-int.crc.testing:6443 Feb 24 14:50:09 crc kubenswrapper[4982]: name: default-cluster Feb 24 14:50:09 crc kubenswrapper[4982]: contexts: Feb 24 14:50:09 crc kubenswrapper[4982]: - context: Feb 24 14:50:09 crc kubenswrapper[4982]: cluster: default-cluster Feb 24 14:50:09 crc kubenswrapper[4982]: namespace: default Feb 24 14:50:09 crc kubenswrapper[4982]: user: default-auth Feb 24 14:50:09 crc kubenswrapper[4982]: name: default-context Feb 24 14:50:09 crc kubenswrapper[4982]: current-context: default-context Feb 24 14:50:09 crc kubenswrapper[4982]: kind: Config Feb 24 14:50:09 crc kubenswrapper[4982]: preferences: {} Feb 24 14:50:09 crc kubenswrapper[4982]: users: Feb 24 14:50:09 crc kubenswrapper[4982]: - name: default-auth Feb 24 14:50:09 crc kubenswrapper[4982]: user: Feb 24 14:50:09 crc kubenswrapper[4982]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 14:50:09 crc kubenswrapper[4982]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 24 14:50:09 crc kubenswrapper[4982]: EOF Feb 24 14:50:09 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjhfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 14:50:09 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.524194 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.543838 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.578200 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.585117 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.586766 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.586805 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.586817 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.586957 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.641040 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.654675 4982 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.654993 4982 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.656429 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.656589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.656625 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.656654 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.656674 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:09Z","lastTransitionTime":"2026-02-24T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.675364 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.679047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.679103 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.679120 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.679141 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.679155 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:09Z","lastTransitionTime":"2026-02-24T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.690092 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.694940 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.695288 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.695550 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.695773 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.695976 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:09Z","lastTransitionTime":"2026-02-24T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.707117 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.707144 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.711556 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.711595 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.711606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.711620 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.711629 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:09Z","lastTransitionTime":"2026-02-24T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.727762 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.731844 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.731881 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.731890 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.731908 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.731917 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:09Z","lastTransitionTime":"2026-02-24T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.737207 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.745529 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.745660 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.747322 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.747351 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.747359 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.747373 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.747383 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:09Z","lastTransitionTime":"2026-02-24T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.779707 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.820450 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.850292 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.850355 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.850378 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.850403 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.850423 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:09Z","lastTransitionTime":"2026-02-24T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.861558 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.867722 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.867853 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.867893 4982 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:50:11.867858073 +0000 UTC m=+73.486916606 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.867953 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.867992 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868007 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:11.867993436 +0000 UTC m=+73.487051929 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.868163 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868176 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.868221 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868269 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-24 14:50:11.868247103 +0000 UTC m=+73.487305596 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868368 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868400 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868413 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868448 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868469 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:11.868451098 +0000 UTC m=+73.487509591 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868475 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868494 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.868594 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:11.868578531 +0000 UTC m=+73.487637064 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.898463 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.940564 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.954080 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.954119 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.954129 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.954145 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.954154 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:09Z","lastTransitionTime":"2026-02-24T14:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.968844 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.968993 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: E0224 14:50:09.969054 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs podName:99337e5a-7ecb-4ed1-8ec5-14979be84e68 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:11.969035785 +0000 UTC m=+73.588094288 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs") pod "network-metrics-daemon-6gwqq" (UID: "99337e5a-7ecb-4ed1-8ec5-14979be84e68") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:50:09 crc kubenswrapper[4982]: I0224 14:50:09.978557 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.021780 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.057560 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.057639 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.057657 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.057684 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.057704 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.060611 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.098231 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.143521 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.144689 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.144752 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.144773 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:10 crc kubenswrapper[4982]: E0224 14:50:10.144836 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:10 crc kubenswrapper[4982]: E0224 14:50:10.144988 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.145016 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:10 crc kubenswrapper[4982]: E0224 14:50:10.145142 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:10 crc kubenswrapper[4982]: E0224 14:50:10.145262 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.159875 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.159934 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.159953 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.159976 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.160001 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.177640 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.221586 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.262914 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.262990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.263016 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.263047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.263069 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.366358 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.366426 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.366450 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.366479 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.366551 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.470298 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.470377 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.470403 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.470434 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.470453 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.572913 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.572974 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.572997 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.573027 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.573050 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.676044 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.676104 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.676123 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.676152 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.676176 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.778740 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.778803 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.778825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.778905 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.778934 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.881990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.882032 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.882042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.882058 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.882068 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.984835 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.984894 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.984904 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.984921 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:10 crc kubenswrapper[4982]: I0224 14:50:10.984932 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:10Z","lastTransitionTime":"2026-02-24T14:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.087152 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.087204 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.087224 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.087247 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.087265 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:11Z","lastTransitionTime":"2026-02-24T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.189127 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.189164 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.189177 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.189192 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.189203 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:11Z","lastTransitionTime":"2026-02-24T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.291167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.291210 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.291227 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.291251 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.291268 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:11Z","lastTransitionTime":"2026-02-24T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.393545 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.393569 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.393577 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.393589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.393597 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:11Z","lastTransitionTime":"2026-02-24T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.496873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.496934 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.496949 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.496973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.496993 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:11Z","lastTransitionTime":"2026-02-24T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.599355 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.599405 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.599422 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.599445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.599462 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:11Z","lastTransitionTime":"2026-02-24T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.702040 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.702118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.702140 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.702171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.702191 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:11Z","lastTransitionTime":"2026-02-24T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.805610 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.805677 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.805696 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.805727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.805748 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:11Z","lastTransitionTime":"2026-02-24T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.887804 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.887955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.887989 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.888020 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.888049 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888122 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:50:15.888085159 +0000 UTC m=+77.507143702 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888164 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888178 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888189 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888224 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:15.888216423 +0000 UTC m=+77.507274916 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888259 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888321 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888344 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888262 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888400 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888441 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:15.888413608 +0000 UTC m=+77.507472141 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:15.888413608 +0000 UTC m=+77.507472141 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888756 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:15.888726516 +0000 UTC m=+77.507785009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.888780 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:15.888773107 +0000 UTC m=+77.507831600 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.908817 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.908871 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.908887 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.908914 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.908929 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:11Z","lastTransitionTime":"2026-02-24T14:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 24 14:50:11 crc kubenswrapper[4982]: I0224 14:50:11.989034 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.989278 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 14:50:11 crc kubenswrapper[4982]: E0224 14:50:11.989403 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs podName:99337e5a-7ecb-4ed1-8ec5-14979be84e68 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:15.989378105 +0000 UTC m=+77.608436818 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs") pod "network-metrics-daemon-6gwqq" (UID: "99337e5a-7ecb-4ed1-8ec5-14979be84e68") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.011402 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.011466 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.011485 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.011536 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.011557 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.120329 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.120425 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.121472 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.121665 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.121693 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.144538 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.144578 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.144630 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:12 crc kubenswrapper[4982]: E0224 14:50:12.144737 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.144768 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:12 crc kubenswrapper[4982]: E0224 14:50:12.144855 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:50:12 crc kubenswrapper[4982]: E0224 14:50:12.144955 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:12 crc kubenswrapper[4982]: E0224 14:50:12.145022 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.225023 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.225094 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.225109 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.225136 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.225153 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.328020 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.328066 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.328079 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.328097 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.328110 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.430920 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.430982 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.431000 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.431024 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.431041 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.533962 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.534024 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.534041 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.534064 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.534083 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.636855 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.636909 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.636927 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.636948 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.636965 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.739911 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.739973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.739986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.740013 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.740030 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.843438 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.843537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.843556 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.843589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.843608 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.946604 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.946680 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.946705 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.946741 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:12 crc kubenswrapper[4982]: I0224 14:50:12.946765 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:12Z","lastTransitionTime":"2026-02-24T14:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.049441 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.049494 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.049534 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.049554 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.049568 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.151828 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.152056 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.152251 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.152413 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.152568 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.255371 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.255436 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.255455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.255491 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.255540 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.359692 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.359749 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.359769 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.359796 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.359816 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.462734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.462793 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.462810 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.462836 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.462856 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.565565 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.565615 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.565633 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.565658 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.565677 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.668209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.668262 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.668284 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.668306 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.668322 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.770939 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.771006 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.771024 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.771049 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.771066 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.874095 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.874150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.874168 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.874196 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.874220 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.976924 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.976993 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.977010 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.977036 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:13 crc kubenswrapper[4982]: I0224 14:50:13.977055 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:13Z","lastTransitionTime":"2026-02-24T14:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.079822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.079884 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.079898 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.079919 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.079933 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:14Z","lastTransitionTime":"2026-02-24T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.145084 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.145186 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.145184 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:14 crc kubenswrapper[4982]: E0224 14:50:14.145289 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:14 crc kubenswrapper[4982]: E0224 14:50:14.145433 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.145451 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:14 crc kubenswrapper[4982]: E0224 14:50:14.145621 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:14 crc kubenswrapper[4982]: E0224 14:50:14.145731 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.183170 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.183234 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.183253 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.183277 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.183295 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:14Z","lastTransitionTime":"2026-02-24T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.286398 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.286453 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.286473 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.286554 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.286582 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:14Z","lastTransitionTime":"2026-02-24T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.389939 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.390020 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.390044 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.390071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.390090 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:14Z","lastTransitionTime":"2026-02-24T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.493665 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.493733 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.493749 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.493774 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.493797 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:14Z","lastTransitionTime":"2026-02-24T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.596367 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.596440 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.596458 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.596487 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.596540 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:14Z","lastTransitionTime":"2026-02-24T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.699793 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.699842 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.699862 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.699889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.699908 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:14Z","lastTransitionTime":"2026-02-24T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.803173 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.803241 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.803259 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.803296 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.803317 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:14Z","lastTransitionTime":"2026-02-24T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.906843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.906947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.906966 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.906990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:14 crc kubenswrapper[4982]: I0224 14:50:14.907011 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:14Z","lastTransitionTime":"2026-02-24T14:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.010058 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.010103 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.010114 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.010132 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.010146 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.112523 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.112573 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.112584 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.112600 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.112613 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.216181 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.216240 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.216258 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.216285 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.216302 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.320016 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.320101 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.320120 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.320149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.320172 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.423657 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.424054 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.424209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.424381 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.424565 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.528537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.528608 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.528625 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.528651 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.528674 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.631050 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.631110 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.631119 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.631132 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.631141 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.733735 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.733795 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.733813 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.733838 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.733859 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.837657 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.837737 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.837765 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.837799 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.837818 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.941929 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.941994 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.942011 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.942039 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.942057 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:15Z","lastTransitionTime":"2026-02-24T14:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
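Has your network provider started?"}

The stanza above repeats every ~100ms because the kubelet keeps republishing the node's NotReady condition until a CNI configuration file shows up in /etc/kubernetes/cni/net.d/. Below is a minimal Go sketch of the check that message implies; the directory is the one named in the log, but the accepted file extensions (.conf, .conflist, .json) are an assumption about common CNI conventions, not code taken from the kubelet or CRI-O:

```go
// cnicheck.go - a hypothetical sketch of the readiness test implied by the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message: the node
// stays NotReady until at least one CNI config file appears in that directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed CNI config extensions
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found; node would stay NotReady")
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}
```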
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.944887 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.945024 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.945079 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.945136 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:15 crc kubenswrapper[4982]: I0224 14:50:15.945188 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.945348 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.945386 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.945442 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.945563 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:23.945532737 +0000 UTC m=+85.564591270 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.946170 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:50:23.946153324 +0000 UTC m=+85.565211847 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.946279 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.946302 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.946320 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.946363 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:23.946350269 +0000 UTC m=+85.565408792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.946417 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.946453 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:23.946441932 +0000 UTC m=+85.565500465 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.946559 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 24 14:50:15 crc kubenswrapper[4982]: E0224 14:50:15.946604 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:23.946591955 +0000 UTC m=+85.565650478 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.044757 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.044829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.044852 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.044882 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.044903 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.045968 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:16 crc kubenswrapper[4982]: E0224 14:50:16.046285 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 14:50:16 crc kubenswrapper[4982]: E0224 14:50:16.046427 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs podName:99337e5a-7ecb-4ed1-8ec5-14979be84e68 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:24.046384961 +0000 UTC m=+85.665443504 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs") pod "network-metrics-daemon-6gwqq" (UID: "99337e5a-7ecb-4ed1-8ec5-14979be84e68") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.145447 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.145794 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:16 crc kubenswrapper[4982]: E0224 14:50:16.145797 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.145958 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:16 crc kubenswrapper[4982]: E0224 14:50:16.146166 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.146260 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:16 crc kubenswrapper[4982]: E0224 14:50:16.146364 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:16 crc kubenswrapper[4982]: E0224 14:50:16.146451 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
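The "not registered" errors above mean the kubelet, freshly restarted, has not yet observed those ConfigMap and Secret objects through its API watches, so every mount attempt fails and is requeued with a growing delay; the "durationBeforeRetry 8s" is consistent with a delay that starts small and doubles per consecutive failure. A sketch of that backoff shape follows; the 500ms seed, 2x factor, and 2m cap are assumptions chosen to match the observed 8s, not constants taken from the kubelet:

```go
// backoff.go - a sketch (not kubelet's actual code) of the doubling retry delay
// behind "No retries permitted until ... (durationBeforeRetry 8s)".
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay time.Duration // current delay; zero until the first failure
	max   time.Duration // upper bound on the delay
}

// next returns the delay to wait after one more consecutive failure.
func (b *backoff) next() time.Duration {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // assumed initial delay
	} else {
		b.delay *= 2
	}
	if b.delay > b.max {
		b.delay = b.max
	}
	return b.delay
}

func main() {
	b := &backoff{max: 2 * time.Minute} // assumed cap
	for i := 1; i <= 6; i++ {
		fmt.Printf("failure %d: retry in %v\n", i, b.next())
	}
	// failure 5 prints "retry in 8s", matching the log's durationBeforeRetry
}
```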
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.148050 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.148098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.148122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.148152 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.148174 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.251563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.251633 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.251651 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.251676 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.251695 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.355209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.355635 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.355796 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.355994 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.356124 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.459537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.459606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.459631 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.459698 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.459722 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.563763 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.563822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.563840 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.563866 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.563886 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.667827 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.667884 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.667901 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.667927 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.667945 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.771316 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.771361 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.771375 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.771394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.771406 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.873765 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.873844 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.873859 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.873876 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.873888 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.976704 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.976748 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.976777 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.976797 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:16 crc kubenswrapper[4982]: I0224 14:50:16.976807 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:16Z","lastTransitionTime":"2026-02-24T14:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.079105 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.079159 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.079175 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.079198 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.079215 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:17Z","lastTransitionTime":"2026-02-24T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.183058 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.183124 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.183141 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.183165 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.183182 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:17Z","lastTransitionTime":"2026-02-24T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.285649 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.285770 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.285792 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.285814 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.285832 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:17Z","lastTransitionTime":"2026-02-24T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.389157 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.389218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.389272 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.389304 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.389323 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:17Z","lastTransitionTime":"2026-02-24T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.492358 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.492406 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.492423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.492445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.492462 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:17Z","lastTransitionTime":"2026-02-24T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.595888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.595927 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.595938 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.595953 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.595965 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:17Z","lastTransitionTime":"2026-02-24T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.699308 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.699357 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.699370 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.699388 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.699400 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:17Z","lastTransitionTime":"2026-02-24T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.802212 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.802268 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.802285 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.802308 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.802325 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:17Z","lastTransitionTime":"2026-02-24T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.905271 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.905341 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.905363 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.905391 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:17 crc kubenswrapper[4982]: I0224 14:50:17.905414 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:17Z","lastTransitionTime":"2026-02-24T14:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.008923 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.008983 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.009055 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.009082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.009099 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
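Has your network provider started?"}

The next record ("Caches populated for *v1.CSIDriver") marks the kubelet's client-go informer cache for CSIDriver objects finishing its initial list/watch. Until the corresponding caches fill, lookups fail with "not registered", and the earlier UnmountVolume.TearDown could not find kubevirt.io.hostpath-provisioner among the registered CSI drivers. A hedged sketch using the same informer machinery the log names (k8s.io/client-go/informers); the kubeconfig path is a placeholder, and this is illustrative, not kubelet code:

```go
// csidrivers.go - watch CSIDriver objects with a client-go shared informer and
// list them once the cache has synced ("Caches populated" in the log
// corresponds to that sync completing).
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(clientset, 0)
	informer := factory.Storage().V1().CSIDrivers().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	if !cache.WaitForCacheSync(stop, informer.HasSynced) {
		panic("CSIDriver cache never synced")
	}
	drivers, err := factory.Storage().V1().CSIDrivers().Lister().List(labels.Everything())
	if err != nil {
		panic(err)
	}
	for _, d := range drivers {
		fmt.Println("CSIDriver:", d.Name)
	}
}
```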
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.039703 4982 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.112185 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.112236 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.112248 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.112270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.112284 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.145202 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.145288 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:18 crc kubenswrapper[4982]: E0224 14:50:18.145397 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.145420 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.145562 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:18 crc kubenswrapper[4982]: E0224 14:50:18.145634 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:50:18 crc kubenswrapper[4982]: E0224 14:50:18.145770 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:50:18 crc kubenswrapper[4982]: E0224 14:50:18.145856 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.215156 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.215240 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.215270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.215304 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.215331 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.318801 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.318854 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.318884 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.318912 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.318932 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.422592 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.422654 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.422670 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.422694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.422712 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.525733 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.525816 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.525831 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.525857 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.525877 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.630700 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.630780 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.630803 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.630830 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.630849 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.734454 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.734570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.734605 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.734643 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.734669 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.838423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.838587 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.838612 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.838635 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.838652 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.941968 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.942060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.942085 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.942126 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:18 crc kubenswrapper[4982]: I0224 14:50:18.942154 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:18Z","lastTransitionTime":"2026-02-24T14:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.045088 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.045140 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.045159 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.045184 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.045201 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.149666 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.149746 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.149767 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.149790 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.149808 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.166480 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.181727 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.195275 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.207254 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.221325 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.234486 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.251561 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.254790 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.254883 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.254909 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.254945 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.254967 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.266981 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.283375 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.302789 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.319526 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.333994 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.350227 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.358697 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.358748 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.358772 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.358802 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.358821 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.385103 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b1
54edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\
":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.398671 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.462770 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.462833 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.462851 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.462876 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.462894 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.566092 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.566174 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.566201 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.566239 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.566261 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.669290 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.669352 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.669369 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.669394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.669411 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.772298 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.772351 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.772366 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.772383 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.772395 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.876437 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.876583 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.876609 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.876637 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.876655 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.979759 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.979824 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.979840 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.979865 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:19 crc kubenswrapper[4982]: I0224 14:50:19.979883 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:19Z","lastTransitionTime":"2026-02-24T14:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.035009 4982 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.082905 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.082969 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.082986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.083016 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.083037 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.085071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.085166 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.085225 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.085248 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.085266 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.104614 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.109707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.109752 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.109767 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.109788 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.109805 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.125731 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.130735 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.130805 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.130831 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.130861 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.130881 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.145568 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.145782 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.145796 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.145920 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.146061 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.146149 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.146300 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.146687 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.147933 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.160692 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.160791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.160813 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.160861 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.160883 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.178724 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.184582 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.184623 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.184640 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.184665 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.184683 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.200303 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: E0224 14:50:20.200565 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.203640 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.203730 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.203752 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.203774 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.203822 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.306913 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.306979 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.306998 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.307023 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.307041 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.409933 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.409994 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.410015 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.410041 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.410060 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.513082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.513130 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.513147 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.513171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.513188 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.556253 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hg2sm" event={"ID":"5d426fc2-19af-43bc-a39c-c63afb2d9909","Type":"ContainerStarted","Data":"fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.573020 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.584424 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.604194 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.616465 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.616545 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.616563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.616587 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.616606 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.619561 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.634685 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.644404 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.663679 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.678353 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.689981 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.704479 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.718997 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.719073 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.719100 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.719131 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.719193 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.719747 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.731890 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.747332 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.771881 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.782935 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.821948 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.822009 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.822026 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.822052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.822070 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.924985 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.925049 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.925074 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.925102 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:20 crc kubenswrapper[4982]: I0224 14:50:20.925123 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:20Z","lastTransitionTime":"2026-02-24T14:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.027760 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.027825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.027850 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.027876 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.027899 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.131001 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.131055 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.131071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.131094 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.131116 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.236017 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.236441 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.236460 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.236487 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.236525 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.340075 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.340121 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.340132 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.340334 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.340347 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.448077 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.448119 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.448131 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.448149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.448161 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.551839 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.551889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.551905 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.551928 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.551945 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.560902 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgtdj" event={"ID":"86687a8a-6996-44fa-a62e-b43266c31922","Type":"ContainerStarted","Data":"daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.562803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.562858 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.565103 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" event={"ID":"465fb356-3c99-4881-81aa-0cad744fd120","Type":"ContainerStarted","Data":"530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.567305 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ccj66" event={"ID":"c8556181-42f0-45af-8922-fd147917bce5","Type":"ContainerStarted","Data":"9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.570126 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.570181 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.575112 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.593721 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.614932 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.632838 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.654292 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.654335 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.654346 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.654364 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.654376 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.655914 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.669239 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.686648 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.707218 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.723115 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.734193 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.757704 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.758076 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.758208 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.758338 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.758445 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.761542 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.783581 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.799905 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.813757 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.827110 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.840877 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.852761 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.860976 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.861149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.861244 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.861337 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.861421 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.869391 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.879481 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.887944 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.895806 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.914702 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.934623 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:21Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.950550 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:21Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.964895 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.965200 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.965307 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.965395 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.965480 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:21Z","lastTransitionTime":"2026-02-24T14:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.968179 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:21Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:21 crc kubenswrapper[4982]: I0224 14:50:21.993535 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:21Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.012299 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.030041 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc 
kubenswrapper[4982]: I0224 14:50:22.060490 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.068909 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.068979 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.068996 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.069021 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.069039 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:22Z","lastTransitionTime":"2026-02-24T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.086774 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.145324 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:22 crc kubenswrapper[4982]: E0224 14:50:22.145547 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.145576 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.145612 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.145663 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:22 crc kubenswrapper[4982]: E0224 14:50:22.145762 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:22 crc kubenswrapper[4982]: E0224 14:50:22.145901 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:22 crc kubenswrapper[4982]: E0224 14:50:22.146071 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.171796 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.171849 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.171865 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.171886 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.171901 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:22Z","lastTransitionTime":"2026-02-24T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.275133 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.275223 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.275247 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.275279 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.275301 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:22Z","lastTransitionTime":"2026-02-24T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.377815 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.377880 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.377894 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.377917 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.377935 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:22Z","lastTransitionTime":"2026-02-24T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.481350 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.481811 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.481826 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.481851 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.481869 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:22Z","lastTransitionTime":"2026-02-24T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.577961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" event={"ID":"465fb356-3c99-4881-81aa-0cad744fd120","Type":"ContainerStarted","Data":"70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.585965 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.585997 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.586007 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.586023 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.586037 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:22Z","lastTransitionTime":"2026-02-24T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.597188 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.615153 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.673766 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.689753 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.689841 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.690055 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.690081 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.690098 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:22Z","lastTransitionTime":"2026-02-24T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.690434 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.710726 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.728103 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.747855 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.770585 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.793857 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.793912 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.793935 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.793963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.793988 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:22Z","lastTransitionTime":"2026-02-24T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.820862 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.850529 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.871017 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.885342 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc 
kubenswrapper[4982]: I0224 14:50:22.897697 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.897751 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.897765 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.897788 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.897802 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:22Z","lastTransitionTime":"2026-02-24T14:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.910088 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.938673 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:22 crc kubenswrapper[4982]: I0224 14:50:22.953395 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:22Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.002167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.002211 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.002223 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.002245 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.002258 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.104898 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.105394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.105417 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.105443 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.105464 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.208019 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.208070 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.208083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.208100 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.208113 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.312401 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.312469 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.312538 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.312575 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.312599 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.416371 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.416423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.416441 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.416468 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.416486 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.519561 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.519622 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.519645 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.519679 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.519704 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.583804 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.609989 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.625228 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.625292 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.625311 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.625337 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.625357 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.652276 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.670803 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.693220 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.713558 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.728349 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.728405 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.728425 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.728455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.728475 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.736120 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.751841 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.768211 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.784536 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.807742 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.807974 4982 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.830778 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.831791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.831871 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.831891 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.831923 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.831948 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.851023 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.876662 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.893462 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.911098 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:23Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:23 crc 
kubenswrapper[4982]: I0224 14:50:23.934605 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.934671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.934689 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.934714 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.934732 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:23Z","lastTransitionTime":"2026-02-24T14:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.947626 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.947772 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.947833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.947886 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.947930 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:50:39.947881495 +0000 UTC m=+101.566939998 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.947984 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948023 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948049 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948063 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948073 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:39.948048759 +0000 UTC m=+101.567107292 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948106 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:39.94809433 +0000 UTC m=+101.567152833 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:23 crc kubenswrapper[4982]: I0224 14:50:23.948109 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948138 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948174 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948224 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948225 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948283 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:39.948268135 +0000 UTC m=+101.567326668 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 24 14:50:23 crc kubenswrapper[4982]: E0224 14:50:23.948345 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:39.948294345 +0000 UTC m=+101.567352868 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.037595 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.037646 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.037660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.037680 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.037694 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.049372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:24 crc kubenswrapper[4982]: E0224 14:50:24.049634 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 14:50:24 crc kubenswrapper[4982]: E0224 14:50:24.049728 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs podName:99337e5a-7ecb-4ed1-8ec5-14979be84e68 nodeName:}" failed. No retries permitted until 2026-02-24 14:50:40.049709343 +0000 UTC m=+101.668767846 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs") pod "network-metrics-daemon-6gwqq" (UID: "99337e5a-7ecb-4ed1-8ec5-14979be84e68") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.141245 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.141293 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.141303 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.141318 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.141328 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.144526 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.144581 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.144662 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.144963 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:24 crc kubenswrapper[4982]: E0224 14:50:24.145454 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:50:24 crc kubenswrapper[4982]: E0224 14:50:24.145718 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:50:24 crc kubenswrapper[4982]: E0224 14:50:24.145776 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:24 crc kubenswrapper[4982]: E0224 14:50:24.145843 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.244907 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.245662 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.245693 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.245733 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.245762 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.349820 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.349867 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.349878 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.349899 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.349912 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.453778 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.453868 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.453888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.453915 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.453933 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.557145 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.557231 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.557256 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.557291 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.557352 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.590567 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" event={"ID":"42019c71-4e1e-4a98-aee6-91061deb320a","Type":"ContainerStarted","Data":"c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e"} Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.592959 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0" exitCode=0 Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.593053 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0"} Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.599380 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea"} Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.608194 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" 
for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.629611 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.645342 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.661152 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.663860 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.663889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.663900 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.663917 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.663931 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.681814 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.709627 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.736119 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.752120 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc 
kubenswrapper[4982]: I0224 14:50:24.767159 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.767204 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.767218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.767252 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.767270 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.769493 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.788141 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.812265 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.836570 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.857011 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.870625 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.870699 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.870719 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.870746 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.870766 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.882368 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.902003 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8
f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.941132 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.966132 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.972729 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.972791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.972813 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.972839 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.972857 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:24Z","lastTransitionTime":"2026-02-24T14:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:24 crc kubenswrapper[4982]: I0224 14:50:24.986625 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:24Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.006790 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent
\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.025136 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.042892 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.072803 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z 
is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.075557 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.075604 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.075619 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.075639 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.075667 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:25Z","lastTransitionTime":"2026-02-24T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.086720 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.103116 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.122287 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.136868 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.154945 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.170323 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.178119 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.178159 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.178171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.178191 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.178204 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:25Z","lastTransitionTime":"2026-02-24T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.197394 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.208461 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.214450 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.242908 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.258000 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.274519 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.279880 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.279946 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.279963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.279983 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.279999 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:25Z","lastTransitionTime":"2026-02-24T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.286532 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.306068 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.320802 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.336993 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.356395 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.372005 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.384489 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.384592 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.384612 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.384649 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.384668 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:25Z","lastTransitionTime":"2026-02-24T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.400063 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.422126 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent
\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.439555 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.460193 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.483980 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z 
is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.486677 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.486721 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.486736 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.486754 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.486770 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:25Z","lastTransitionTime":"2026-02-24T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.504087 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.590035 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.590117 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.590140 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.590175 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.590196 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:25Z","lastTransitionTime":"2026-02-24T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.605638 4982 generic.go:334] "Generic (PLEG): container finished" podID="42019c71-4e1e-4a98-aee6-91061deb320a" containerID="c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e" exitCode=0 Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.606241 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" event={"ID":"42019c71-4e1e-4a98-aee6-91061deb320a","Type":"ContainerDied","Data":"c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.616117 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.616183 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.616202 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.616219 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.616237 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.616254 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.629948 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.649224 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.670387 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.693885 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.693930 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.693942 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.693958 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.693970 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:25Z","lastTransitionTime":"2026-02-24T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.695333 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.711088 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.725683 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.739138 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\
\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.751543 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.765880 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.789088 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z 
is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.796428 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.796485 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.796529 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.796557 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.796575 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:25Z","lastTransitionTime":"2026-02-24T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.806193 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.828326 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.845318 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.858843 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.872471 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:25Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.899903 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.899969 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.899988 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.900017 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:25 crc kubenswrapper[4982]: I0224 14:50:25.900036 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:25Z","lastTransitionTime":"2026-02-24T14:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.003712 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.003779 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.003797 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.003826 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.003844 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.106888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.106946 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.106962 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.106989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.107006 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.145260 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.145304 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.145335 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:26 crc kubenswrapper[4982]: E0224 14:50:26.145428 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.145465 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:26 crc kubenswrapper[4982]: E0224 14:50:26.145702 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:26 crc kubenswrapper[4982]: E0224 14:50:26.145864 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:50:26 crc kubenswrapper[4982]: E0224 14:50:26.145991 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.210635 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.210733 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.210761 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.210800 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.210823 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.314847 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.314909 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.314925 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.314948 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.314966 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.418900 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.418978 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.418996 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.419028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.419052 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.522475 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.522624 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.522644 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.522677 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.522702 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.623833 4982 generic.go:334] "Generic (PLEG): container finished" podID="42019c71-4e1e-4a98-aee6-91061deb320a" containerID="21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562" exitCode=0 Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.623901 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" event={"ID":"42019c71-4e1e-4a98-aee6-91061deb320a","Type":"ContainerDied","Data":"21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562"} Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.626575 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.626689 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.626714 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.626793 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.626818 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.646291 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.662180 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.678324 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc 
kubenswrapper[4982]: I0224 14:50:26.699114 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.730876 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.730932 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.730952 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.730982 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.731004 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.752090 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.776556 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.806685 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.820264 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.832943 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.832987 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.833003 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.833030 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.833052 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.835769 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.863565 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.873721 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.885060 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.903547 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.920131 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.930276 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:26Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.936341 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.936378 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.936390 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.936410 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:26 crc kubenswrapper[4982]: I0224 14:50:26.936422 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:26Z","lastTransitionTime":"2026-02-24T14:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.039727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.039801 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.039822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.039853 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.039875 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.145792 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.145843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.145862 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.145886 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.145903 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.248460 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.248577 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.248603 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.248636 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.248663 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.351989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.352126 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.352148 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.352178 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.352201 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.456827 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.457006 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.457036 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.457132 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.457204 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.560402 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.560483 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.560538 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.560577 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.560600 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.636422 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.640573 4982 generic.go:334] "Generic (PLEG): container finished" podID="42019c71-4e1e-4a98-aee6-91061deb320a" containerID="8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28" exitCode=0 Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.640652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" event={"ID":"42019c71-4e1e-4a98-aee6-91061deb320a","Type":"ContainerDied","Data":"8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.665101 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.665163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.665181 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.665204 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.665222 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.670012 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.697574 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.718752 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.737176 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.770138 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.770184 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.770201 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.770226 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.770243 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.771489 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.789229 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.811003 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.834055 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.853676 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.873010 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.873078 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.873096 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.873126 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.873147 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.874372 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.898001 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.921474 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.941819 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.976324 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.976845 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.976864 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.976423 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.976888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.977092 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:27Z","lastTransitionTime":"2026-02-24T14:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:27 crc kubenswrapper[4982]: I0224 14:50:27.992601 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:27Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.080322 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.080377 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.080390 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.080407 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.080420 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:28Z","lastTransitionTime":"2026-02-24T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.145331 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.145419 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.145424 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.145354 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:28 crc kubenswrapper[4982]: E0224 14:50:28.145595 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:28 crc kubenswrapper[4982]: E0224 14:50:28.145725 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:28 crc kubenswrapper[4982]: E0224 14:50:28.145807 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:28 crc kubenswrapper[4982]: E0224 14:50:28.145919 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.183550 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.183625 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.183644 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.183674 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.183692 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:28Z","lastTransitionTime":"2026-02-24T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.286949 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.287029 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.287048 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.287076 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.287097 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:28Z","lastTransitionTime":"2026-02-24T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.390264 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.390333 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.390357 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.390385 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.390406 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:28Z","lastTransitionTime":"2026-02-24T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.493371 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.493436 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.493453 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.493477 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.493495 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:28Z","lastTransitionTime":"2026-02-24T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.596790 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.596854 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.596892 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.596920 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.596937 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:28Z","lastTransitionTime":"2026-02-24T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.649884 4982 generic.go:334] "Generic (PLEG): container finished" podID="42019c71-4e1e-4a98-aee6-91061deb320a" containerID="9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482" exitCode=0 Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.649958 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" event={"ID":"42019c71-4e1e-4a98-aee6-91061deb320a","Type":"ContainerDied","Data":"9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.673150 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.701411 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.701468 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.701485 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.701539 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.701607 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:28Z","lastTransitionTime":"2026-02-24T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.714824 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z 
is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.734050 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.758827 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.779066 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.798663 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.804382 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.804445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.804466 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.804529 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.804553 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:28Z","lastTransitionTime":"2026-02-24T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.814140 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.835013 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.854867 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.875246 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.899484 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.907745 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.907798 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.907815 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.907837 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.907852 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:28Z","lastTransitionTime":"2026-02-24T14:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.919285 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.944812 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.967312 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:28 crc kubenswrapper[4982]: I0224 14:50:28.986595 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:28Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc 
kubenswrapper[4982]: I0224 14:50:29.010945 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.011021 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.011046 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.011078 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.011100 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.114116 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.114180 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.114196 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.114218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.114235 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.161642 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.163313 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.185411 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.207479 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.216850 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.216891 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.216909 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.216933 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.216950 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.226465 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.242382 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.261175 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.281791 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.298880 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.320390 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.324258 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.324320 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.324339 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.324366 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.324386 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.344357 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.359747 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.376760 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39
554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.391301 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.414237 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z 
is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.427111 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.427145 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.427154 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.427134 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.427167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.427198 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.529170 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.529206 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.529218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.529233 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.529243 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.632438 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.632476 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.632487 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.632518 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.632529 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.657457 4982 generic.go:334] "Generic (PLEG): container finished" podID="42019c71-4e1e-4a98-aee6-91061deb320a" containerID="69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4" exitCode=0 Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.657541 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" event={"ID":"42019c71-4e1e-4a98-aee6-91061deb320a","Type":"ContainerDied","Data":"69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.680714 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.701906 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.732780 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.737339 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.737540 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.737569 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.737602 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.737625 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.754935 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.770884 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.790569 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.809998 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.826188 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc 
kubenswrapper[4982]: I0224 14:50:29.842042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.842074 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.842086 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.842102 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.842114 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.844779 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 
14:50:29.867256 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.882963 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.895938 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.916956 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.932003 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.945083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.945121 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.945133 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.945150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.945161 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:29Z","lastTransitionTime":"2026-02-24T14:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.957037 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:29 crc kubenswrapper[4982]: I0224 14:50:29.972612 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.048631 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.048677 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.048691 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.048709 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.048721 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.145408 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.145464 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.145525 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.145543 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.145623 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.145729 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.145871 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.146035 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.151188 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.151246 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.151264 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.153660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.153718 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.256786 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.256869 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.256896 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.256929 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.256954 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.360072 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.360139 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.360160 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.360192 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.360215 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.446980 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.447435 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.447459 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.447490 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.447560 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.469927 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.476115 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.476186 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.476296 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.476371 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.476410 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.497197 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.502761 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.502839 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.502864 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.502898 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.502921 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.528454 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.534974 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.535039 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
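
All of these status-patch attempts are rejected for the same reason: the serving certificate of the "node.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, long before the node's clock time of 2026-02-24T14:50:30Z. Below is a minimal diagnostic sketch in Go (standard library only, not part of any cluster tooling; it assumes it is run on the node itself, since the webhook listens on loopback) that dials the endpoint and prints the serving certificate's validity window:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Endpoint taken from the failing Post in the log: https://127.0.0.1:9743/node
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the certificate instead of verifying it
	})
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
			cert.Subject.String(), cert.NotBefore, cert.NotAfter)
	}
}

Against this node it should report a notAfter of 2025-08-24T17:21:41Z, matching the x509 error above.
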
event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.535063 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.535098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.535116 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.552270 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.556059 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.556119 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
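
The condition value in the setters.go:603 entries is a plain JSON rendering of the node's Ready condition. A small sketch that decodes it with the Go standard library; the struct mirrors only the fields visible in the log (the real type is k8s.io/api/core/v1.NodeCondition, which uses richer time types):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition models just the fields that appear in the log line.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Copied verbatim from the "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
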
event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.556134 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.556156 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.556172 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.574191 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: E0224 14:50:30.574418 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.576217 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
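
The cycle above repeats until the kubelet gives up with "update node status exceeds retry count": updateNodeStatus caps its attempts at the nodeStatusUpdateRetry constant, currently 5 in the kubelet sources. A simplified sketch of that bounded-retry pattern (an illustration of the shape of the loop, not the kubelet's actual implementation):

package main

import (
	"errors"
	"fmt"
)

// Mirrors the kubelet constant of the same name (5 in current sources).
const nodeStatusUpdateRetry = 5

func updateNodeStatus(tryUpdate func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdate(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Reproduce the failure mode in the log: every PATCH is rejected by
	// the admission webhook, so all five attempts fail.
	err := updateNodeStatus(func() error {
		return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	})
	fmt.Println(err) // update node status exceeds retry count
}
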
event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.576257 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.576273 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.576294 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.576309 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.669341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f"} Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.675487 4982 generic.go:334] "Generic (PLEG): container finished" podID="42019c71-4e1e-4a98-aee6-91061deb320a" containerID="e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29" exitCode=0 Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.675558 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" event={"ID":"42019c71-4e1e-4a98-aee6-91061deb320a","Type":"ContainerDied","Data":"e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29"} Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.683901 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.683970 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.683991 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.684019 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.684047 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.709336 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.735662 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.757800 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.780194 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.786723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.786783 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.786801 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.786825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.786843 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.800698 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.818046 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.835054 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.851213 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.871984 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.885570 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.890083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.890113 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.890122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.890138 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.890149 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.905313 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.923018 4982 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.944438 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.964895 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.976424 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc 
kubenswrapper[4982]: I0224 14:50:30.989225 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.993011 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.993060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.993077 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.993099 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:30 crc kubenswrapper[4982]: I0224 14:50:30.993116 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:30Z","lastTransitionTime":"2026-02-24T14:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.001363 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:30Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.015323 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.030705 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.042643 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc 
kubenswrapper[4982]: I0224 14:50:31.059040 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.074801 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.089215 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.095839 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.095955 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.095980 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.096011 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.096031 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:31Z","lastTransitionTime":"2026-02-24T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.112295 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.126384 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.159173 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9c0a14c2d4f8eb7b57542f98463823646e8505
9fc80a49e5b6c1335798847f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.176245 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.192312 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.198745 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.198809 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.198831 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.198860 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.198885 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:31Z","lastTransitionTime":"2026-02-24T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.209275 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.232385 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.252818 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.271076 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.301772 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.301842 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.301863 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.301886 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.301903 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:31Z","lastTransitionTime":"2026-02-24T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.404278 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.404309 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.404319 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.404332 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.404341 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:31Z","lastTransitionTime":"2026-02-24T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.507656 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.507706 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.507723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.507744 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.507762 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:31Z","lastTransitionTime":"2026-02-24T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.610101 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.610146 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.610158 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.610176 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.610188 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:31Z","lastTransitionTime":"2026-02-24T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.683205 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" event={"ID":"42019c71-4e1e-4a98-aee6-91061deb320a","Type":"ContainerStarted","Data":"e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.683714 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.683781 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.683796 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.702266 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.708790 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.710662 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.711930 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.711973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.711985 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.712005 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.712016 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:31Z","lastTransitionTime":"2026-02-24T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.714305 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.724580 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.739153 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.755371 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.768454 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.782195 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.800964 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.814523 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.814558 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.814570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.814586 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.814598 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:31Z","lastTransitionTime":"2026-02-24T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.816016 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":
\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.826259 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.837238 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.852784 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.865744 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.881293 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.901993 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.917463 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.917528 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.917540 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.917558 4982 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.917571 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:31Z","lastTransitionTime":"2026-02-24T14:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.925894 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.941154 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.963215 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.980690 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:31 crc kubenswrapper[4982]: I0224 14:50:31.998788 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:31Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.015199 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.020185 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.020236 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.020254 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.020541 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.020565 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.031128 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 
crc kubenswrapper[4982]: I0224 14:50:32.052699 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.075869 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.093033 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.113147 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.123114 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.123193 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.123213 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.123619 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.123846 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.141851 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:
50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.144984 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:32 crc kubenswrapper[4982]: E0224 14:50:32.145293 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.145103 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:32 crc kubenswrapper[4982]: E0224 14:50:32.145628 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.145018 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:32 crc kubenswrapper[4982]: E0224 14:50:32.145908 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.145926 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:32 crc kubenswrapper[4982]: E0224 14:50:32.146193 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.162550 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"moun
tPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.188231 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.201095 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.214743 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.226933 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.226983 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.227002 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.227026 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.227046 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.230201 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:32Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.329313 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.329379 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.329397 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.329421 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.329439 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.432448 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.432652 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.432895 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.433124 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.433343 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.539748 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.539794 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.539812 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.539834 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.539854 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.642785 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.642838 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.642854 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.642877 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.642894 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.746002 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.746070 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.746101 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.746131 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.746153 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.849318 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.849377 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.849403 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.849435 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.849459 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.952413 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.952477 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.952496 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.952571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:32 crc kubenswrapper[4982]: I0224 14:50:32.952591 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:32Z","lastTransitionTime":"2026-02-24T14:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.055362 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.055433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.055451 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.055475 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.055493 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.159645 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.159733 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.159758 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.160244 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.160581 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.263640 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.263704 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.263725 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.263754 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.263775 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.366380 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.366448 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.366472 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.366535 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.366562 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.469959 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.470006 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.470022 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.470046 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.470064 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.572983 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.573049 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.573066 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.573089 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.573105 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.675645 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.675699 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.675715 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.675739 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.675757 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.699847 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/0.log" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.706000 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f" exitCode=1 Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.706061 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f"} Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.707201 4982 scope.go:117] "RemoveContainer" containerID="2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.726624 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.752619 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.774028 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.777917 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.777963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.777980 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.778003 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.778022 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.794440 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.809354 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.828797 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.846284 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.863177 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.881198 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.881719 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.881740 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.881766 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.881784 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.884956 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.904609 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.920257 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.942814 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.965313 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.984379 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:33Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.985106 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.985150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.985162 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.985182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:33 crc kubenswrapper[4982]: I0224 14:50:33.985194 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:33Z","lastTransitionTime":"2026-02-24T14:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.017744 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:32Z\\\",\\\"message\\\":\\\"224 14:50:32.543900 6865 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:32.543965 6865 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:32.544616 6865 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:32.544706 6865 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:50:32.544752 6865 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:50:32.544841 6865 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:32.544884 6865 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:32.544933 6865 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:32.544983 6865 factory.go:656] Stopping watch factory\\\\nI0224 14:50:32.545045 6865 ovnkube.go:599] Stopped ovnkube\\\\nI0224 14:50:32.545118 6865 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:32.545169 6865 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:50:32.545172 6865 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:32.545251 6865 handler.go:208] 
Removed *v1.Node event handler 7\\\\nI0224 14:50:32.545185 6865 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a6
01124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.032774 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.087447 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.087536 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.087554 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.087579 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.087596 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:34Z","lastTransitionTime":"2026-02-24T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.144562 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.144661 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.144679 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.144569 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:34 crc kubenswrapper[4982]: E0224 14:50:34.144793 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:34 crc kubenswrapper[4982]: E0224 14:50:34.144857 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:34 crc kubenswrapper[4982]: E0224 14:50:34.144954 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:34 crc kubenswrapper[4982]: E0224 14:50:34.145041 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.190554 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.190596 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.190607 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.190622 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.190636 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:34Z","lastTransitionTime":"2026-02-24T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.292943 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.292972 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.292979 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.292992 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.293002 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:34Z","lastTransitionTime":"2026-02-24T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.395226 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.395266 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.395277 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.395294 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.395306 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:34Z","lastTransitionTime":"2026-02-24T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.497562 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.497595 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.497603 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.497620 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.497630 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:34Z","lastTransitionTime":"2026-02-24T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.600792 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.600868 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.600886 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.601294 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.601345 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:34Z","lastTransitionTime":"2026-02-24T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.704681 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.704741 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.704757 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.704818 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.704871 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:34Z","lastTransitionTime":"2026-02-24T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.713369 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/0.log" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.717190 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.717765 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.736481 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.774314 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:32Z\\\",\\\"message\\\":\\\"224 14:50:32.543900 6865 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:32.543965 6865 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:32.544616 6865 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:32.544706 6865 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:50:32.544752 6865 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:50:32.544841 6865 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:32.544884 6865 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:32.544933 6865 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:32.544983 6865 factory.go:656] Stopping watch factory\\\\nI0224 14:50:32.545045 6865 ovnkube.go:599] Stopped ovnkube\\\\nI0224 14:50:32.545118 6865 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:32.545169 6865 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:50:32.545172 6865 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:32.545251 6865 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:32.545185 6865 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.796299 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.807089 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.807139 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.807150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.807167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.807178 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:34Z","lastTransitionTime":"2026-02-24T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.813672 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.834812 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.852854 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.873618 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.888583 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.909015 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.909684 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.909726 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.909738 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.909756 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.909768 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:34Z","lastTransitionTime":"2026-02-24T14:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.929469 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.950420 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.971223 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:34 crc kubenswrapper[4982]: I0224 14:50:34.989448 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:34Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.012964 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.013026 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.013043 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.013070 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.013088 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.013355 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.035587 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.051734 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.115082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.115142 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.115160 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.115182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.115200 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.217861 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.217925 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.217947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.217975 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.217995 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.320972 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.321036 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.321053 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.321077 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.321094 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.423945 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.424023 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.424046 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.424076 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.424100 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.527228 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.527277 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.527295 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.527317 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.527334 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.631409 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.631458 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.631475 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.631506 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.631557 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.723490 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/1.log" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.724226 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/0.log" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.728741 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd" exitCode=1 Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.728809 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.728870 4982 scope.go:117] "RemoveContainer" containerID="2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.732191 4982 scope.go:117] "RemoveContainer" containerID="fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd" Feb 24 14:50:35 crc kubenswrapper[4982]: E0224 14:50:35.733235 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.735704 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.735827 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.735919 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.736044 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.736066 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.755086 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.781718 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d9c0a14c2d4f8eb7b57542f98463823646e85059fc80a49e5b6c1335798847f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:32Z\\\",\\\"message\\\":\\\"224 14:50:32.543900 6865 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:32.543965 6865 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:32.544616 6865 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:32.544706 6865 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:50:32.544752 6865 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:50:32.544841 6865 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:32.544884 6865 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:32.544933 6865 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:32.544983 6865 factory.go:656] Stopping watch factory\\\\nI0224 14:50:32.545045 6865 ovnkube.go:599] Stopped ovnkube\\\\nI0224 14:50:32.545118 6865 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:32.545169 6865 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:50:32.545172 6865 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:32.545251 6865 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:32.545185 6865 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:34Z\\\",\\\"message\\\":\\\"EgressIP event handler 8 for removal\\\\nI0224 14:50:34.770448 7010 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:34.770484 7010 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:34.770508 7010 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:34.770544 7010 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:34.770559 7010 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:50:34.770576 7010 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:34.770856 7010 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 14:50:34.771232 7010 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:50:34.771256 7010 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI0224 14:50:34.771282 7010 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:50:34.771290 7010 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:34.771299 7010 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:50:34.771316 7010 factory.go:656] Stopping watch factory\\\\nI0224 14:50:34.771335 7010 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.799439 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.819879 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.837453 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.839092 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.839139 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.839184 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.839202 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.839213 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.857146 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 
crc kubenswrapper[4982]: I0224 14:50:35.879662 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.902263 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.928788 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.942760 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.942829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.942842 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.942868 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.942883 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:35Z","lastTransitionTime":"2026-02-24T14:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.948041 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.965976 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:35 crc kubenswrapper[4982]: I0224 14:50:35.990907 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:35Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.007217 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:36 crc 
kubenswrapper[4982]: I0224 14:50:36.027103 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.045758 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.045841 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.045866 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.045898 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.045923 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.046601 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.066391 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.144826 4982 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:36 crc kubenswrapper[4982]: E0224 14:50:36.145036 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.145021 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.145143 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:36 crc kubenswrapper[4982]: E0224 14:50:36.145597 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:36 crc kubenswrapper[4982]: E0224 14:50:36.145886 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.146082 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:36 crc kubenswrapper[4982]: E0224 14:50:36.146400 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.148612 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.148912 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.148945 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.149013 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.149031 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.252589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.252660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.252677 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.252708 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.252729 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.356082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.356158 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.356185 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.356216 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.356237 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.459145 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.459201 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.459218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.459246 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.459264 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.562645 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.562706 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.562718 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.562735 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.562748 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.665177 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.665239 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.665256 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.665278 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.665292 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.735126 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/1.log"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.740476 4982 scope.go:117] "RemoveContainer" containerID="fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd"
Feb 24 14:50:36 crc kubenswrapper[4982]: E0224 14:50:36.740831 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.761039 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.768121 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.768183 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.768201 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.768228 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.768247 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.779108 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.794695 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.810686 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.829847 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.846675 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.862209 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.871396 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.871464 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.871490 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.871587 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.871607 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.880923 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.908614 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.925840 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.939926 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.956969 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.974150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.974221 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.974247 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.974279 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.974303 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:36Z","lastTransitionTime":"2026-02-24T14:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.977786 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:36 crc kubenswrapper[4982]: I0224 14:50:36.995405 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:36Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.013681 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:37Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.039664 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:34Z\\\",\\\"message\\\":\\\"EgressIP event handler 8 for removal\\\\nI0224 14:50:34.770448 7010 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:34.770484 7010 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:34.770508 7010 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:34.770544 7010 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:34.770559 7010 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:50:34.770576 7010 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:34.770856 7010 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 14:50:34.771232 7010 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:50:34.771256 7010 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:50:34.771282 7010 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:50:34.771290 7010 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:34.771299 7010 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:50:34.771316 7010 factory.go:656] Stopping watch factory\\\\nI0224 14:50:34.771335 7010 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:37Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.077941 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.078023 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.078046 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.078077 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.078106 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:37Z","lastTransitionTime":"2026-02-24T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.180776 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.180814 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.180828 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.180845 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.180857 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:37Z","lastTransitionTime":"2026-02-24T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.283846 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.283917 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.283938 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.283968 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.283994 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:37Z","lastTransitionTime":"2026-02-24T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.386829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.386895 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.386912 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.386940 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.386960 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:37Z","lastTransitionTime":"2026-02-24T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.490689 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.490758 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.490779 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.490808 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.490830 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:37Z","lastTransitionTime":"2026-02-24T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.593495 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.593571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.593585 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.593606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.593620 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:37Z","lastTransitionTime":"2026-02-24T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.696086 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.696138 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.696161 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.696189 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.696210 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:37Z","lastTransitionTime":"2026-02-24T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.799867 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.799931 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.799950 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.799974 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.799997 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:37Z","lastTransitionTime":"2026-02-24T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.903870 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.903959 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.903979 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.904005 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:37 crc kubenswrapper[4982]: I0224 14:50:37.904026 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:37Z","lastTransitionTime":"2026-02-24T14:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.006931 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.007011 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.007034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.007064 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.007098 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.110183 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.110243 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.110267 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.110298 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.110319 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.144923 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:38 crc kubenswrapper[4982]: E0224 14:50:38.145175 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.144958 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:38 crc kubenswrapper[4982]: E0224 14:50:38.145303 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.144960 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:38 crc kubenswrapper[4982]: E0224 14:50:38.145387 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.144930 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:38 crc kubenswrapper[4982]: E0224 14:50:38.145474 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.213438 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.213544 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.213571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.213603 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.213627 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.317273 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.317345 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.317363 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.317388 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.317405 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.420807 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.420854 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.420864 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.420880 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.420891 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.524031 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.524098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.524122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.524155 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.524177 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.627941 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.628028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.628051 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.628082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.628104 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.731548 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.731613 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.731631 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.731664 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.731683 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.834891 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.834955 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.834972 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.834997 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.835014 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.938703 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.938775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.938799 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.938830 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:38 crc kubenswrapper[4982]: I0224 14:50:38.938855 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:38Z","lastTransitionTime":"2026-02-24T14:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.042056 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.042122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.042145 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.042179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.042201 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.148947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.149578 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.149675 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.149712 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.149748 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.171286 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.189782 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.203995 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is 
after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.218899 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.233726 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.250917 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.254219 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.254290 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.254310 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.254346 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.254373 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.276172 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.311449 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:34Z\\\",\\\"message\\\":\\\"EgressIP event handler 8 for removal\\\\nI0224 14:50:34.770448 7010 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:34.770484 7010 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:34.770508 7010 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:34.770544 7010 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:34.770559 7010 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:50:34.770576 7010 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:34.770856 7010 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 14:50:34.771232 7010 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:50:34.771256 7010 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:50:34.771282 7010 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:50:34.771290 7010 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:34.771299 7010 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:50:34.771316 7010 factory.go:656] Stopping watch factory\\\\nI0224 14:50:34.771335 7010 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.331972 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.353136 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.358140 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.358456 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.358498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.358542 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.358558 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.370470 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.386383 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.408618 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.428938 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.447552 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.464638 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.464688 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.464702 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.464723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.464738 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.465311 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.567823 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.567894 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.567913 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.567942 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.567965 4982 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.670645 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.670732 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.670762 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.670815 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.670848 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.775250 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.775319 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.775338 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.775367 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.775385 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.879769 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.879833 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.879851 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.879883 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.879903 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.983395 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.983466 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.983485 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.983549 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:39 crc kubenswrapper[4982]: I0224 14:50:39.983571 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:39Z","lastTransitionTime":"2026-02-24T14:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.037112 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.037386 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:51:12.037347101 +0000 UTC m=+133.656405624 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.037466 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.037576 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.037654 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.037694 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.037734 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.037842 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:51:12.037817843 +0000 UTC m=+133.656876366 (durationBeforeRetry 32s). 
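The TearDown failure above means the kubelet has no record of the kubevirt.io.hostpath-provisioner CSI driver this boot: drivers announce themselves by placing a registration socket in the kubelet's plugin-watcher directory, and until the provisioner re-registers, unmounts of its volumes will keep failing like this. A minimal sketch that lists what is currently registered, assuming the conventional /var/lib/kubelet/plugins_registry location (not confirmed by this log):

    # Hypothetical diagnostic: list CSI plugin-registration sockets.
    # /var/lib/kubelet/plugins_registry is the usual kubelet plugin-watcher
    # directory; treat the path as an assumption for this host.
    import os
    import stat

    REG_DIR = "/var/lib/kubelet/plugins_registry"
    for name in sorted(os.listdir(REG_DIR)):
        mode = os.stat(os.path.join(REG_DIR, name)).st_mode
        kind = "socket" if stat.S_ISSOCK(mode) else "other"
        print(kind, name)

If no entry for kubevirt.io.hostpath-provisioner shows up, the "not found in the list of registered CSI drivers" error is expected.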
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.037917 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.037954 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.037950 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.038000 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.038083 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.038107 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.038131 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:51:12.038089781 +0000 UTC m=+133.657148304 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.037984 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.038208 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 14:51:12.038176024 +0000 UTC m=+133.657234557 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.038239 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:51:12.038224515 +0000 UTC m=+133.657283048 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.087623 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.087694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.087713 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.087738 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.087763 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.139314 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.139466 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.139548 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs podName:99337e5a-7ecb-4ed1-8ec5-14979be84e68 nodeName:}" failed. No retries permitted until 2026-02-24 14:51:12.13953191 +0000 UTC m=+133.758590403 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs") pod "network-metrics-daemon-6gwqq" (UID: "99337e5a-7ecb-4ed1-8ec5-14979be84e68") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.146547 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.146659 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.146714 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.146784 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.146852 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.146995 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.147179 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.147359 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.192038 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.192117 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.192142 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.192177 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.192202 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.295544 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.295623 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.295650 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.295681 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.295704 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.398842 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.398908 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.398926 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.398950 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.398970 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.502356 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.502414 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.502431 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.502457 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.502475 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.606002 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.606058 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.606075 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.606100 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.606118 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.708991 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.709073 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.709093 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.709118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.709136 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.812380 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.812445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.812467 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.812493 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.812549 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.894475 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.894598 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.894616 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.894662 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.894681 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.919286 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:40Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.926092 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.926164 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
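The status payloads embedded in these "failed to patch status" entries are quoted twice: once when the patch was interpolated into the error string, and once more when the err= value was quoted for the journal, which is why every JSON quote surfaces as \\\" here. Two rounds of unescaping recover the original patch. A minimal sketch, assuming one complete journal line from this log is supplied on stdin:

    # Hypothetical sketch: recover an escaped status patch from a
    # "failed to patch status" line as real JSON. Paste one complete
    # journal line on stdin.
    import json
    import re
    import sys

    line = sys.stdin.read()
    m = re.search(r'failed to patch status \\"(\{.*\})\\" for (?:pod|node)',
                  line, flags=re.S)
    payload = m.group(1)
    for _ in range(2):  # undo the outer err=%q quoting, then the inner %q
        payload = payload.encode().decode("unicode_escape")
    patch = json.loads(payload)
    print(json.dumps(patch, indent=2)[:400])

Applied to the node-status line above, this yields the conditions, the allocatable/capacity figures, and the image list as ordinary JSON.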
event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.926183 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.926212 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.926231 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.949343 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:40Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.955828 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.955899 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.955920 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.956070 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.956102 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:40 crc kubenswrapper[4982]: E0224 14:50:40.982796 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:40Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.988672 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.988738 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.988755 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.988782 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:40 crc kubenswrapper[4982]: I0224 14:50:40.988804 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:40Z","lastTransitionTime":"2026-02-24T14:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:41 crc kubenswrapper[4982]: E0224 14:50:41.015560 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:41Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.021440 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.021542 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
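
The payload the kubelet keeps retrying here is a Kubernetes strategic merge patch: the $setElementOrder/conditions directive pins the ordering of the merge-keyed conditions list, and each entry under conditions is then merged into the stored object by its type key. Below is a minimal sketch of how such a patch is applied, using the k8s.io/apimachinery strategic-merge-patch helper; the two JSON documents are stripped-down stand-ins for the Node object and the logged patch, not the full payloads:

    package main

    import (
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func main() {
    	// Stand-in for the Node object currently stored on the API server.
    	original := []byte(`{"status":{"conditions":[` +
    		`{"type":"MemoryPressure","status":"Unknown"},` +
    		`{"type":"Ready","status":"Unknown"}]}}`)

    	// Stand-in for the kubelet's patch from the log: $setElementOrder pins the
    	// order of the merge-keyed "conditions" list; entries merge by "type".
    	patch := []byte(`{"status":{` +
    		`"$setElementOrder/conditions":[{"type":"MemoryPressure"},{"type":"Ready"}],` +
    		`"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)

    	// corev1.Node supplies the struct tags (patchMergeKey=type) driving the merge.
    	merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Node{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(string(merged))
    }

Run against these stand-ins, the merge leaves MemoryPressure untouched and overwrites only the Ready condition, which is why the kubelet can send a compact patch instead of the whole node status.
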
event="NodeHasNoDiskPressure" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.021556 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.021574 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.021587 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:41Z","lastTransitionTime":"2026-02-24T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:41 crc kubenswrapper[4982]: E0224 14:50:41.043839 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:41Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:41 crc kubenswrapper[4982]: E0224 14:50:41.044044 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.046706 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
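
At this point the kubelet has burned its whole retry budget for one status sync (it gives up after a fixed number of attempts, the nodeStatusUpdateRetry constant in the kubelet sources, hence "update node status exceeds retry count"), and every attempt failed on the same root cause: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-24. A minimal sketch for confirming the expiry from the node, assuming the endpoint from the log; it dials the listener, skips chain verification (we only want to read the certificate, not trust it), and compares NotAfter with the current time:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Webhook endpoint taken from the error above; adjust if it differs.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		InsecureSkipVerify: true, // only reading the cert, not trusting it
    	})
    	if err != nil {
    		log.Fatalf("dial: %v", err)
    	}
    	defer conn.Close()

    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Println("subject:  ", cert.Subject)
    	fmt.Println("notBefore:", cert.NotBefore.UTC())
    	fmt.Println("notAfter: ", cert.NotAfter.UTC())
    	if time.Now().After(cert.NotAfter) {
    		fmt.Println("=> certificate is expired, matching the x509 error in the log")
    	}
    }

Until that certificate is rotated, every node status patch will keep being rejected the same way, which is what produces the repeating blocks above.
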
event="NodeHasSufficientMemory" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.046756 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.046769 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.046791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.046813 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:41Z","lastTransitionTime":"2026-02-24T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.150309 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.150375 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.150394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.150418 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.150436 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:41Z","lastTransitionTime":"2026-02-24T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.167199 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.253916 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.254008 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.254025 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.254049 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.254065 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:41Z","lastTransitionTime":"2026-02-24T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.358870 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.358938 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.358957 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.358984 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:41 crc kubenswrapper[4982]: I0224 14:50:41.359004 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:41Z","lastTransitionTime":"2026-02-24T14:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:42 crc kubenswrapper[4982]: I0224 14:50:42.144919 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:42 crc kubenswrapper[4982]: I0224 14:50:42.144981 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:42 crc kubenswrapper[4982]: I0224 14:50:42.144977 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:42 crc kubenswrapper[4982]: I0224 14:50:42.145073 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:42 crc kubenswrapper[4982]: E0224 14:50:42.145129 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:42 crc kubenswrapper[4982]: E0224 14:50:42.145352 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:50:42 crc kubenswrapper[4982]: E0224 14:50:42.145459 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:50:42 crc kubenswrapper[4982]: E0224 14:50:42.145620 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:50:42 crc kubenswrapper[4982]: I0224 14:50:42.192934 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:42 crc kubenswrapper[4982]: I0224 14:50:42.193037 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:42 crc kubenswrapper[4982]: I0224 14:50:42.193061 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:42 crc kubenswrapper[4982]: I0224 14:50:42.193171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:42 crc kubenswrapper[4982]: I0224 14:50:42.193204 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:42Z","lastTransitionTime":"2026-02-24T14:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:44 crc kubenswrapper[4982]: I0224 14:50:44.145058 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:44 crc kubenswrapper[4982]: I0224 14:50:44.145107 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:44 crc kubenswrapper[4982]: I0224 14:50:44.145174 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:44 crc kubenswrapper[4982]: I0224 14:50:44.145080 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:44 crc kubenswrapper[4982]: E0224 14:50:44.145274 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:50:44 crc kubenswrapper[4982]: E0224 14:50:44.145495 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:50:44 crc kubenswrapper[4982]: E0224 14:50:44.145654 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:50:44 crc kubenswrapper[4982]: E0224 14:50:44.145771 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.144870 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.144952 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:50:46 crc kubenswrapper[4982]: E0224 14:50:46.145065 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.145118 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.145139 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:50:46 crc kubenswrapper[4982]: E0224 14:50:46.145288 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:50:46 crc kubenswrapper[4982]: E0224 14:50:46.145647 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:50:46 crc kubenswrapper[4982]: E0224 14:50:46.145713 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.336367 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.336438 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.336455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.336480 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.336531 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:46Z","lastTransitionTime":"2026-02-24T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.439671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.439813 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.439834 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.439896 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.439915 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:46Z","lastTransitionTime":"2026-02-24T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.542591 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.542667 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.542691 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.542722 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.542748 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:46Z","lastTransitionTime":"2026-02-24T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.645023 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.645092 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.645111 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.645137 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.645155 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:46Z","lastTransitionTime":"2026-02-24T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.748307 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.748376 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.748392 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.748418 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.748442 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:46Z","lastTransitionTime":"2026-02-24T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.851790 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.851848 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.851865 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.851889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.851906 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:46Z","lastTransitionTime":"2026-02-24T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.955783 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.955853 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.955870 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.955895 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:46 crc kubenswrapper[4982]: I0224 14:50:46.955914 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:46Z","lastTransitionTime":"2026-02-24T14:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.059485 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.059621 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.059644 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.059675 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.059697 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.162000 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.162047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.162058 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.162073 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.162085 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.265169 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.265229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.265245 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.265268 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.265286 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.367795 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.367871 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.367891 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.367921 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.367943 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.471208 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.471270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.471288 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.471316 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.471334 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.575188 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.575285 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.575305 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.575331 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.575356 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.678082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.678183 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.678204 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.678225 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.678240 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.782274 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.782343 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.782360 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.782385 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.782403 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.885413 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.885478 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.885522 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.885549 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.885568 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.988565 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.988606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.988624 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.988647 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:47 crc kubenswrapper[4982]: I0224 14:50:47.988665 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:47Z","lastTransitionTime":"2026-02-24T14:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.091973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.092099 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.092121 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.092150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.092172 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:48Z","lastTransitionTime":"2026-02-24T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.144395 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.144437 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.144410 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.144395 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:48 crc kubenswrapper[4982]: E0224 14:50:48.144551 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:48 crc kubenswrapper[4982]: E0224 14:50:48.144614 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:48 crc kubenswrapper[4982]: E0224 14:50:48.144678 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:48 crc kubenswrapper[4982]: E0224 14:50:48.144847 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.194925 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.195017 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.195040 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.195069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.195095 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:48Z","lastTransitionTime":"2026-02-24T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.298213 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.298267 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.298284 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.298306 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.298322 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:48Z","lastTransitionTime":"2026-02-24T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.401047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.401281 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.401346 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.401423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.401480 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:48Z","lastTransitionTime":"2026-02-24T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.513383 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.513462 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.513481 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.513560 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.513588 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:48Z","lastTransitionTime":"2026-02-24T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.617592 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.617971 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.618133 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.618275 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.618420 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:48Z","lastTransitionTime":"2026-02-24T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.726538 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.726604 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.726623 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.726650 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.726677 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:48Z","lastTransitionTime":"2026-02-24T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.829982 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.830305 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.830557 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.830794 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.831014 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:48Z","lastTransitionTime":"2026-02-24T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.933781 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.934182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.934342 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.934481 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:48 crc kubenswrapper[4982]: I0224 14:50:48.934669 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:48Z","lastTransitionTime":"2026-02-24T14:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.037726 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.038082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.038226 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.038365 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.038546 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.141989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.142047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.142069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.142098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.142122 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.145848 4982 scope.go:117] "RemoveContainer" containerID="fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.185211 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.207535 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.226042 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.245786 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.245851 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.245867 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.245892 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.245910 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.247917 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.268696 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.291733 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z"
Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.317445 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.339861 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.349732 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.349785 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.349802 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.349824 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.349842 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.358002 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.380399 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.411208 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88
a1ff92c154d8b1732263fccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:34Z\\\",\\\"message\\\":\\\"EgressIP event handler 8 for removal\\\\nI0224 14:50:34.770448 7010 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:34.770484 7010 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:34.770508 7010 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:34.770544 7010 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:34.770559 7010 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:50:34.770576 7010 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:34.770856 7010 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 14:50:34.771232 7010 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:50:34.771256 7010 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:50:34.771282 7010 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:50:34.771290 7010 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:34.771299 7010 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:50:34.771316 7010 factory.go:656] Stopping watch factory\\\\nI0224 14:50:34.771335 7010 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.432341 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.451173 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.454382 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.454443 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.454460 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.454485 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.454536 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.488456 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.505223 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.520445 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.532661 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.558024 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.558075 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.558093 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.558118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.558136 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.662531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.662573 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.662587 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.662606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.662622 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.765296 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.765341 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.765350 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.765365 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.765374 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.794786 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/1.log" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.797772 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.798395 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.826343 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.842769 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.857657 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.867196 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.867266 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.867286 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.867311 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.867332 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.876465 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.892420 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.905298 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.930765 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.950261 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.970072 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.970098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.970107 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.970120 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.970130 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:49Z","lastTransitionTime":"2026-02-24T14:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.970160 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:49 crc kubenswrapper[4982]: I0224 14:50:49.991731 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:49Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.017679 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27
d13a963f2c32d2680c329015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:34Z\\\",\\\"message\\\":\\\"EgressIP event handler 8 for removal\\\\nI0224 14:50:34.770448 7010 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:34.770484 7010 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:34.770508 7010 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:34.770544 7010 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:34.770559 7010 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:50:34.770576 7010 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:34.770856 7010 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 14:50:34.771232 7010 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:50:34.771256 7010 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:50:34.771282 7010 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:50:34.771290 7010 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:34.771299 7010 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:50:34.771316 7010 factory.go:656] Stopping watch factory\\\\nI0224 14:50:34.771335 7010 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.036589 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.050802 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.072681 4982 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.072731 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.072745 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.072767 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.072783 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.081729 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and 
discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.107318 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.124945 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.137118 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.145307 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.145328 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.145348 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.145394 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:50 crc kubenswrapper[4982]: E0224 14:50:50.145429 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:50 crc kubenswrapper[4982]: E0224 14:50:50.145592 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:50 crc kubenswrapper[4982]: E0224 14:50:50.145771 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:50 crc kubenswrapper[4982]: E0224 14:50:50.145854 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.174990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.175025 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.175034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.175048 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.175057 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.277142 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.277232 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.277250 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.277274 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.277291 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.380063 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.380165 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.380187 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.380215 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.380232 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.483087 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.483133 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.483147 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.483168 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.483181 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.586155 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.586207 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.586218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.586235 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.586247 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.689018 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.689081 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.689099 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.689123 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.689141 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.792486 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.792589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.792605 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.792628 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.792645 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.804109 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/2.log" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.805238 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/1.log" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.809959 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015" exitCode=1 Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.810039 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.810130 4982 scope.go:117] "RemoveContainer" containerID="fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.811377 4982 scope.go:117] "RemoveContainer" containerID="82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015" Feb 24 14:50:50 crc kubenswrapper[4982]: E0224 14:50:50.811751 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.845494 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb
7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.864649 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.881555 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.895756 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.895789 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.895797 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.895813 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.895823 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.907438 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.928427 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.947220 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.971352 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.996256 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:50Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.998180 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.998252 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.998275 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.998304 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:50 crc kubenswrapper[4982]: I0224 14:50:50.998326 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:50Z","lastTransitionTime":"2026-02-24T14:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.013710 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.036129 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.069485 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27
d13a963f2c32d2680c329015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdd2c3f90fcf44c1be4930cd0812e3037073ef88a1ff92c154d8b1732263fccd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:34Z\\\",\\\"message\\\":\\\"EgressIP event handler 8 for removal\\\\nI0224 14:50:34.770448 7010 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:34.770484 7010 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:34.770508 7010 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:34.770544 7010 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:34.770559 7010 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:50:34.770576 7010 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:34.770856 7010 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 14:50:34.771232 7010 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:50:34.771256 7010 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:50:34.771282 7010 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:50:34.771290 7010 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:34.771299 7010 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:50:34.771316 7010 factory.go:656] Stopping watch factory\\\\nI0224 14:50:34.771335 7010 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:50Z\\\",\\\"message\\\":\\\":160\\\\nI0224 14:50:50.172915 7211 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:50.173226 7211 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:50.173270 7211 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:50.173281 7211 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:50.173307 7211 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:50.173328 7211 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:50.173379 7211 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:50.173424 7211 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:50.173424 7211 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 14:50:50.173444 7211 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:50.173451 7211 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 14:50:50.173468 7211 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:50:50.173482 7211 handler.go:190] Sending *v1.EgressIP event 
handler 8 for removal\\\\nI0224 14:50:50.173530 7211 factory.go:656] Stopping watch factory\\\\nI0224 14:50:50.173552 7211 ovnkube.go:599] Stopped ovnkube\\\\nI0224 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.087574 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.101420 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.101542 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.101599 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.101627 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.101644 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.104677 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.125747 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.148089 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.158986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.159060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.159082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.159111 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.159134 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.166989 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: E0224 14:50:51.180963 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1
f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.184535 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.186732 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.186797 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.186817 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
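Annotation: every "Failed to update status for pod" and "Error updating node status, will retry" entry in this burst fails for the same reason, quoted verbatim in the errors above: the serving certificate behind the pod.network-node-identity.openshift.io and node.network-node-identity.openshift.io webhooks at https://127.0.0.1:9743 expired at 2025-08-24T17:21:41Z, while the node clock reads 2026-02-24T14:50:51Z, so every admission call the status patches must pass through is rejected at the TLS layer. Independently, the node keeps reporting NotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet. A minimal diagnostic sketch in Python, assuming only that the webhook endpoint named in the log is reachable from where the script runs (the host and port come from the log; everything else here is illustrative, not part of the captured journal):

import socket
import ssl

# Webhook endpoint named in the failed Post calls above.
HOST, PORT = "127.0.0.1", 9743

# Disable verification: the expired certificate is exactly what we want
# to retrieve, and a verifying handshake would refuse to complete.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

# Emit the PEM so its NotAfter field can be read with any X.509 tool;
# the standard library alone does not parse validity dates out of DER.
print(ssl.DER_cert_to_PEM_cert(der))

To make the retry loop visible at a glance in a dump like this, a small tally of the recurring signatures can help (the pattern strings are quoted verbatim from the entries above; the journal is assumed to be saved as a plain-text file whose name is passed as the first argument — a hypothetical setup, not something the log itself describes):

import sys
from collections import Counter

# Recurring signatures, quoted verbatim from the journal entries above.
PATTERNS = {
    "pod status patch rejected by webhook": "Failed to update status for pod",
    "node status patch rejected by webhook": "Error updating node status, will retry",
    "node marked NotReady": "Node became not ready",
    "expired webhook certificate": "certificate has expired or is not yet valid",
}

counts = Counter()
with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
    for line in fh:
        for label, needle in PATTERNS.items():
            counts[label] += line.count(needle)

for label, n in counts.most_common():
    print(f"{n:6d}  {label}")

The "Error updating node status, will retry" entries that follow are the kubelet retrying its node-status patch a small fixed number of times per sync period; each attempt carries an identical payload and fails identically, so these entries will keep recurring until the webhook certificate is rotated and the network plugin writes its CNI configuration.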
Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.186896 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.186941 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: E0224 14:50:51.207218 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.211854 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.211915 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.211932 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.211959 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.211977 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: E0224 14:50:51.231209 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.236623 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.236681 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.236697 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.236721 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.236738 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: E0224 14:50:51.257675 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.262178 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.262204 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.262211 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.262223 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.262231 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: E0224 14:50:51.280053 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: E0224 14:50:51.280157 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.281458 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.281531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.281551 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.281571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.281587 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.384085 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.384149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.384170 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.384194 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.384212 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.487474 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.487571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.487588 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.487613 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.487629 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.590179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.590231 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.590243 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.590261 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.590273 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.694232 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.694332 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.694357 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.694386 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.694433 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.797529 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.797622 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.797640 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.797690 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.797708 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.827368 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/2.log" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.835423 4982 scope.go:117] "RemoveContainer" containerID="82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015" Feb 24 14:50:51 crc kubenswrapper[4982]: E0224 14:50:51.835735 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.856621 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.874900 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.893804 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.901218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.901268 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.901285 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.901309 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.901327 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:51Z","lastTransitionTime":"2026-02-24T14:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.918819 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.940880 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.966220 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:51 crc kubenswrapper[4982]: I0224 14:50:51.985064 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:51Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.004984 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.005108 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.005132 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.005162 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.005184 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.027215 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.058443 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.083221 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.102327 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc 
kubenswrapper[4982]: I0224 14:50:52.109237 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.109315 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.109339 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.109370 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.109396 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.126653 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.144878 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.144957 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:52 crc kubenswrapper[4982]: E0224 14:50:52.145070 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.145104 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:52 crc kubenswrapper[4982]: E0224 14:50:52.145252 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:52 crc kubenswrapper[4982]: E0224 14:50:52.145389 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.145409 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:52 crc kubenswrapper[4982]: E0224 14:50:52.145564 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.148351 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.166424 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 
14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.187587 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.212459 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.212599 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.212627 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.212658 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.212680 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.216973 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:50Z\\\",\\\"message\\\":\\\":160\\\\nI0224 14:50:50.172915 7211 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:50.173226 7211 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:50.173270 7211 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:50.173281 7211 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:50.173307 7211 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:50.173328 7211 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:50.173379 7211 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:50.173424 7211 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:50.173424 7211 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 14:50:50.173444 7211 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:50.173451 7211 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 14:50:50.173468 7211 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:50:50.173482 7211 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:50.173530 7211 factory.go:656] Stopping watch factory\\\\nI0224 14:50:50.173552 7211 ovnkube.go:599] Stopped ovnkube\\\\nI0224 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.234092 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:52Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.316357 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.316424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.316442 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.316468 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.316486 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.418638 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.418685 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.418702 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.418725 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.418742 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.521570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.521597 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.521604 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.521616 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.521624 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.624102 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.624319 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.624338 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.624368 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.624386 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.727508 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.727547 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.727558 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.727576 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.727585 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.831104 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.831149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.831157 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.831172 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.831181 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.934423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.934490 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.934549 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.934578 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:52 crc kubenswrapper[4982]: I0224 14:50:52.934619 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:52Z","lastTransitionTime":"2026-02-24T14:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.038578 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.038645 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.038663 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.038689 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.038707 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.141598 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.141651 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.141667 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.141689 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.141706 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.244613 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.244677 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.244695 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.244722 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.244739 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.348775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.348863 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.348886 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.348917 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.348939 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.451785 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.451867 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.451886 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.451911 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.451929 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.559493 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.559620 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.559641 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.559668 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.559695 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.663166 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.663230 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.663246 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.663279 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.663298 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.766877 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.767276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.767295 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.767320 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.767339 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.869873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.869966 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.869983 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.870006 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.870024 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.973017 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.973085 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.973101 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.973121 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:53 crc kubenswrapper[4982]: I0224 14:50:53.973135 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:53Z","lastTransitionTime":"2026-02-24T14:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.076062 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.076103 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.076149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.076165 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.076174 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:54Z","lastTransitionTime":"2026-02-24T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.145407 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.145483 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.145485 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.145432 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:54 crc kubenswrapper[4982]: E0224 14:50:54.145619 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:54 crc kubenswrapper[4982]: E0224 14:50:54.145680 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:54 crc kubenswrapper[4982]: E0224 14:50:54.145825 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:54 crc kubenswrapper[4982]: E0224 14:50:54.145918 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.179545 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.179591 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.179600 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.179613 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.179622 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:54Z","lastTransitionTime":"2026-02-24T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.283990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.284069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.284089 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.284124 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.284145 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:54Z","lastTransitionTime":"2026-02-24T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.387560 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.387617 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.387631 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.387649 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.387660 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:54Z","lastTransitionTime":"2026-02-24T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.490336 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.490404 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.490422 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.490445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.490461 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:54Z","lastTransitionTime":"2026-02-24T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.594098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.594490 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.594678 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.594818 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.595001 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:54Z","lastTransitionTime":"2026-02-24T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.698333 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.698392 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.698409 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.698432 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.698448 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:54Z","lastTransitionTime":"2026-02-24T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.801654 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.802172 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.802384 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.802639 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.802858 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:54Z","lastTransitionTime":"2026-02-24T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.906426 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.906554 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.906573 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.906592 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:54 crc kubenswrapper[4982]: I0224 14:50:54.906647 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:54Z","lastTransitionTime":"2026-02-24T14:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.009468 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.009558 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.009576 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.009598 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.009617 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.112330 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.112385 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.112405 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.112431 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.112448 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.215410 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.215469 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.215482 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.215523 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.215536 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.318888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.318946 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.318966 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.318991 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.319008 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.422067 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.422134 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.422157 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.422191 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.422216 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.524811 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.524853 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.524899 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.524915 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.524926 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.628276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.628822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.629041 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.629249 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.629464 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.734013 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.734091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.734108 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.734136 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.734154 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.838352 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.838401 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.838421 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.838444 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.838462 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.942859 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.942917 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.942939 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.942964 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:55 crc kubenswrapper[4982]: I0224 14:50:55.942983 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:55Z","lastTransitionTime":"2026-02-24T14:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.046097 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.046155 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.046171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.046193 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.046211 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.145275 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.145332 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.145377 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.145431 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:56 crc kubenswrapper[4982]: E0224 14:50:56.146234 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:56 crc kubenswrapper[4982]: E0224 14:50:56.146364 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:56 crc kubenswrapper[4982]: E0224 14:50:56.146536 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:56 crc kubenswrapper[4982]: E0224 14:50:56.146676 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.148799 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.148847 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.148864 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.148887 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.148903 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.253348 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.253412 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.253643 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.253666 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.253686 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.358428 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.358494 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.358561 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.358585 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.358637 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.466669 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.466778 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.466808 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.466835 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.466854 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.570298 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.570357 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.570382 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.570403 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.570420 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.672792 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.672844 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.672861 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.672881 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.672899 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.775814 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.775846 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.775856 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.775871 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.775882 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.878662 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.878722 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.878742 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.878766 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.878788 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.985998 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.986059 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.986077 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.986105 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:56 crc kubenswrapper[4982]: I0224 14:50:56.986124 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:56Z","lastTransitionTime":"2026-02-24T14:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.089370 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.089442 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.089466 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.089539 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.089566 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:57Z","lastTransitionTime":"2026-02-24T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.192890 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.192959 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.192978 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.193015 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.193035 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:57Z","lastTransitionTime":"2026-02-24T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.296948 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.297029 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.297050 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.297081 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.297104 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:57Z","lastTransitionTime":"2026-02-24T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.400586 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.400665 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.400684 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.400713 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.400735 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:57Z","lastTransitionTime":"2026-02-24T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.504304 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.504378 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.504396 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.504424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.504443 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:57Z","lastTransitionTime":"2026-02-24T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.611458 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.611583 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.611601 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.611625 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.611645 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:57Z","lastTransitionTime":"2026-02-24T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.714975 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.715035 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.715048 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.715069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.715083 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:57Z","lastTransitionTime":"2026-02-24T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.818218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.818284 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.818306 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.818335 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.818356 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:57Z","lastTransitionTime":"2026-02-24T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.921727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.921826 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.921843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.921900 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:57 crc kubenswrapper[4982]: I0224 14:50:57.921921 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:57Z","lastTransitionTime":"2026-02-24T14:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.026187 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.026239 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.026256 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.026280 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.026296 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.128814 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.128874 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.128892 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.128919 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.128937 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.145454 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.145540 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:50:58 crc kubenswrapper[4982]: E0224 14:50:58.145643 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.145655 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.145692 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:50:58 crc kubenswrapper[4982]: E0224 14:50:58.145782 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:50:58 crc kubenswrapper[4982]: E0224 14:50:58.145871 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:50:58 crc kubenswrapper[4982]: E0224 14:50:58.146012 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.231736 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.231793 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.231811 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.231835 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.231885 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.335344 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.335381 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.335394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.335411 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.335424 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.438596 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.438660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.438680 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.438705 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.438722 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.541762 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.542070 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.542254 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.542395 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.542576 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.645956 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.646589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.646614 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.646640 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.646657 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.749641 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.749691 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.749707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.749729 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.749747 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.853065 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.853133 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.853156 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.853179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.853196 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.956429 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.956533 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.956564 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.956592 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:58 crc kubenswrapper[4982]: I0224 14:50:58.956613 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:58Z","lastTransitionTime":"2026-02-24T14:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.064018 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.064102 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.064123 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.064215 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.064374 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:50:59Z","lastTransitionTime":"2026-02-24T14:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:50:59 crc kubenswrapper[4982]: E0224 14:50:59.164830 4982 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.167941 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.187726 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.205460 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.234390 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.255651 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.272290 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: E0224 14:50:59.282327 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.301969 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.333622 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:50Z\\\",\\\"message\\\":\\\":160\\\\nI0224 14:50:50.172915 7211 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:50.173226 7211 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:50.173270 7211 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:50.173281 7211 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:50.173307 7211 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:50.173328 7211 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:50.173379 7211 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:50.173424 7211 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:50.173424 7211 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 14:50:50.173444 7211 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:50.173451 7211 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 14:50:50.173468 7211 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:50:50.173482 7211 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:50.173530 7211 factory.go:656] Stopping watch factory\\\\nI0224 14:50:50.173552 7211 ovnkube.go:599] Stopped ovnkube\\\\nI0224 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.355133 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.374673 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.400190 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.424875 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.444971 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.464222 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.501776 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb
7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.527440 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:50:59 crc kubenswrapper[4982]: I0224 14:50:59.544782 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:50:59Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:00 crc kubenswrapper[4982]: I0224 14:51:00.144965 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:00 crc kubenswrapper[4982]: I0224 14:51:00.145048 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:00 crc kubenswrapper[4982]: I0224 14:51:00.145091 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:00 crc kubenswrapper[4982]: I0224 14:51:00.145064 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:00 crc kubenswrapper[4982]: E0224 14:51:00.145156 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:00 crc kubenswrapper[4982]: E0224 14:51:00.145263 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:00 crc kubenswrapper[4982]: E0224 14:51:00.145322 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:00 crc kubenswrapper[4982]: E0224 14:51:00.145404 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.318947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.319018 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.319035 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.319061 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.319082 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:01Z","lastTransitionTime":"2026-02-24T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:01 crc kubenswrapper[4982]: E0224 14:51:01.341489 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:01Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.434445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.434546 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.434565 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.434588 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:01 crc kubenswrapper[4982]: I0224 14:51:01.434606 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:01Z","lastTransitionTime":"2026-02-24T14:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:01 crc kubenswrapper[4982]: E0224 14:51:01.459699 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:01Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:01 crc kubenswrapper[4982]: E0224 14:51:01.459935 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 14:51:02 crc kubenswrapper[4982]: I0224 14:51:02.145001 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:02 crc kubenswrapper[4982]: I0224 14:51:02.145142 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:02 crc kubenswrapper[4982]: E0224 14:51:02.145215 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:02 crc kubenswrapper[4982]: E0224 14:51:02.145357 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:02 crc kubenswrapper[4982]: I0224 14:51:02.145391 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:02 crc kubenswrapper[4982]: I0224 14:51:02.145473 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:02 crc kubenswrapper[4982]: E0224 14:51:02.145717 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:02 crc kubenswrapper[4982]: E0224 14:51:02.145931 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:02 crc kubenswrapper[4982]: I0224 14:51:02.146936 4982 scope.go:117] "RemoveContainer" containerID="82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015" Feb 24 14:51:02 crc kubenswrapper[4982]: E0224 14:51:02.147183 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" Feb 24 14:51:04 crc kubenswrapper[4982]: I0224 14:51:04.145103 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:04 crc kubenswrapper[4982]: I0224 14:51:04.145262 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:04 crc kubenswrapper[4982]: I0224 14:51:04.145297 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:04 crc kubenswrapper[4982]: E0224 14:51:04.145460 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:04 crc kubenswrapper[4982]: I0224 14:51:04.145592 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:04 crc kubenswrapper[4982]: E0224 14:51:04.145667 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:04 crc kubenswrapper[4982]: E0224 14:51:04.145711 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:04 crc kubenswrapper[4982]: E0224 14:51:04.145762 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:04 crc kubenswrapper[4982]: I0224 14:51:04.161294 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 24 14:51:04 crc kubenswrapper[4982]: E0224 14:51:04.283313 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 14:51:06 crc kubenswrapper[4982]: I0224 14:51:06.144816 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:06 crc kubenswrapper[4982]: E0224 14:51:06.144970 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:06 crc kubenswrapper[4982]: I0224 14:51:06.145004 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:06 crc kubenswrapper[4982]: I0224 14:51:06.145050 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:06 crc kubenswrapper[4982]: E0224 14:51:06.145193 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:06 crc kubenswrapper[4982]: E0224 14:51:06.145328 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:06 crc kubenswrapper[4982]: I0224 14:51:06.144954 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:06 crc kubenswrapper[4982]: E0224 14:51:06.145461 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.144657 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.144703 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.144759 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.144838 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:08 crc kubenswrapper[4982]: E0224 14:51:08.144849 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:08 crc kubenswrapper[4982]: E0224 14:51:08.144964 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:08 crc kubenswrapper[4982]: E0224 14:51:08.145059 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:08 crc kubenswrapper[4982]: E0224 14:51:08.145209 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.902890 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/0.log" Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.903011 4982 generic.go:334] "Generic (PLEG): container finished" podID="86687a8a-6996-44fa-a62e-b43266c31922" containerID="daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa" exitCode=1 Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.903072 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgtdj" event={"ID":"86687a8a-6996-44fa-a62e-b43266c31922","Type":"ContainerDied","Data":"daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa"} Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.903716 4982 scope.go:117] "RemoveContainer" containerID="daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa" Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.937087 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:08Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.972554 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:50Z\\\",\\\"message\\\":\\\":160\\\\nI0224 14:50:50.172915 7211 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:50.173226 7211 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:50.173270 7211 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:50.173281 7211 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:50.173307 7211 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:50.173328 7211 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:50.173379 7211 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:50.173424 7211 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:50.173424 7211 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 14:50:50.173444 7211 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:50.173451 7211 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 14:50:50.173468 7211 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:50:50.173482 7211 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:50.173530 7211 factory.go:656] Stopping watch factory\\\\nI0224 14:50:50.173552 7211 ovnkube.go:599] Stopped ovnkube\\\\nI0224 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:08Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:08 crc kubenswrapper[4982]: I0224 14:51:08.994388 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:08Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.014603 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.036969 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.058369 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.085115 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.103618 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.138428 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb
7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.162472 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc64417d-91fc-4b1c-9d05-c37b6b50c6f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:31Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 14:49:01.797846 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 14:49:01.799848 1 observer_polling.go:159] Starting file observer\\\\nI0224 14:49:01.840363 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 14:49:01.843602 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 14:49:31.747863 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 14:49:31.748104 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.180225 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.197731 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.212940 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc 
kubenswrapper[4982]: I0224 14:51:09.234317 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.251878 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.269121 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: E0224 14:51:09.284105 4982 kubelet.go:2916] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.296216 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7
fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\
\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.320137 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"2026-02-24T14:50:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4\\\\n2026-02-24T14:50:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4 to /host/opt/cni/bin/\\\\n2026-02-24T14:50:23Z [verbose] multus-daemon started\\\\n2026-02-24T14:50:23Z [verbose] Readiness Indicator file check\\\\n2026-02-24T14:51:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.343378 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc64417d-91fc-4b1c-9d05-c37b6b50c6f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:31Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 14:49:01.797846 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 14:49:01.799848 1 observer_polling.go:159] Starting file observer\\\\nI0224 14:49:01.840363 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 14:49:01.843602 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 14:49:31.747863 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 14:49:31.748104 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.364968 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.383199 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.418455 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.443825 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.465166 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.489087 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.516818 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"2026-02-24T14:50:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4\\\\n2026-02-24T14:50:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4 to /host/opt/cni/bin/\\\\n2026-02-24T14:50:23Z [verbose] multus-daemon started\\\\n2026-02-24T14:50:23Z [verbose] Readiness Indicator file check\\\\n2026-02-24T14:51:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.535773 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.554211 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.580037 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27
d13a963f2c32d2680c329015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:50Z\\\",\\\"message\\\":\\\":160\\\\nI0224 14:50:50.172915 7211 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:50.173226 7211 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:50.173270 7211 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:50.173281 7211 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:50.173307 7211 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:50.173328 7211 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:50.173379 7211 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:50.173424 7211 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:50.173424 7211 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 14:50:50.173444 7211 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:50.173451 7211 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 14:50:50.173468 7211 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:50:50.173482 7211 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:50.173530 7211 factory.go:656] Stopping watch factory\\\\nI0224 14:50:50.173552 7211 ovnkube.go:599] Stopped ovnkube\\\\nI0224 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.594044 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.615602 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.631714 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.646144 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.664074 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.676618 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.693896 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.911946 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/0.log" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.912057 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-jgtdj" event={"ID":"86687a8a-6996-44fa-a62e-b43266c31922","Type":"ContainerStarted","Data":"1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae"} Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.948798 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.970448 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc64417d-91fc-4b1c-9d05-c37b6b50c6f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:31Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 14:49:01.797846 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 14:49:01.799848 1 observer_polling.go:159] Starting file observer\\\\nI0224 14:49:01.840363 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 14:49:01.843602 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 14:49:31.747863 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 14:49:31.748104 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:09 crc kubenswrapper[4982]: I0224 14:51:09.989380 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:09Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.005453 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.023724 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc 
kubenswrapper[4982]: I0224 14:51:10.045084 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.065392 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.079092 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.104138 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.128219 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"2026-02-24T14:50:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4\\\\n2026-02-24T14:50:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4 to /host/opt/cni/bin/\\\\n2026-02-24T14:50:23Z [verbose] multus-daemon started\\\\n2026-02-24T14:50:23Z [verbose] Readiness Indicator file check\\\\n2026-02-24T14:51:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.144689 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.144795 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.144927 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:10 crc kubenswrapper[4982]: E0224 14:51:10.144920 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.144721 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:10 crc kubenswrapper[4982]: E0224 14:51:10.145114 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:10 crc kubenswrapper[4982]: E0224 14:51:10.145286 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:10 crc kubenswrapper[4982]: E0224 14:51:10.145388 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.150905 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.175950 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27
d13a963f2c32d2680c329015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:50Z\\\",\\\"message\\\":\\\":160\\\\nI0224 14:50:50.172915 7211 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:50.173226 7211 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:50.173270 7211 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:50.173281 7211 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:50.173307 7211 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:50.173328 7211 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:50.173379 7211 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:50.173424 7211 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:50.173424 7211 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 14:50:50.173444 7211 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:50.173451 7211 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 14:50:50.173468 7211 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:50:50.173482 7211 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:50.173530 7211 factory.go:656] Stopping watch factory\\\\nI0224 14:50:50.173552 7211 ovnkube.go:599] Stopped ovnkube\\\\nI0224 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.192474 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.206999 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.222695 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.242490 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.254760 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:10 crc kubenswrapper[4982]: I0224 14:51:10.268609 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:10Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.160966 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.642044 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.642122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.642141 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.642169 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.642191 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:11Z","lastTransitionTime":"2026-02-24T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:11 crc kubenswrapper[4982]: E0224 14:51:11.666760 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:11Z is after 2025-08-24T17:21:41Z"
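The node-status entry above embeds the rejected JSON patch as a backslash-escaped string between err="failed to patch status \" and the closing quote. To inspect one of these payloads, peel the escaping until it parses; a small sketch (Python 3; the excerpt below is illustrative, not the full patch from the log):

```python
import json

def unescape_patch(raw: str) -> dict:
    """Peel backslash-escaped quote layers until the string parses as JSON.

    The journal rendering adds one escape layer per enclosing quoted field,
    which is why the patches above appear with runs of backslashes.
    """
    while True:
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            if '\\"' not in raw:
                raise  # not an escaping problem after all
            raw = raw.replace('\\"', '"')

# One-layer illustrative excerpt standing in for the real payload:
excerpt = '{\\"status\\":{\\"conditions\\":[{\\"type\\":\\"Ready\\",\\"status\\":\\"False\\"}]}}'
print(json.dumps(unescape_patch(excerpt), indent=2))
```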
event="NodeHasNoDiskPressure" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.673585 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.673611 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.673629 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:11Z","lastTransitionTime":"2026-02-24T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:11 crc kubenswrapper[4982]: E0224 14:51:11.695419 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:11Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.701587 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.701642 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.701665 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.701697 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.701721 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:11Z","lastTransitionTime":"2026-02-24T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:11 crc kubenswrapper[4982]: E0224 14:51:11.722665 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:11Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.729985 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.730067 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
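Note also the second, independent failure these cycles report: the Ready condition stays False because no CNI configuration exists in /etc/kubernetes/cni/net.d/ yet. A quick check of what the kubelet sees, as a sketch (Python 3 on the node; the directory name comes straight from the condition message, while the expected OVN-Kubernetes filename is an assumption, not from the log):

```python
import os

cni_dir = "/etc/kubernetes/cni/net.d"  # directory named in the NotReady condition
# On an OVN-Kubernetes node a config such as 10-ovn-kubernetes.conf normally
# appears here once the network plugin pods are up (filename is plugin-specific).
entries = sorted(os.listdir(cni_dir)) if os.path.isdir(cni_dir) else []
for name in entries:
    print(name)
if not entries:
    print(f"no CNI configuration files in {cni_dir}, matching the NotReady condition")
```

Five patch attempts per sync (14:51:11.666760 through .776478) is consistent with the upstream kubelet's fixed nodeStatusUpdateRetry count of 5 before it gives up until the next sync loop.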
event="NodeHasNoDiskPressure" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.730095 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.730129 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.730157 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:11Z","lastTransitionTime":"2026-02-24T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:11 crc kubenswrapper[4982]: E0224 14:51:11.751504 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:11Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.756107 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.756163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.756182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.756206 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:11 crc kubenswrapper[4982]: I0224 14:51:11.756222 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:11Z","lastTransitionTime":"2026-02-24T14:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:11 crc kubenswrapper[4982]: E0224 14:51:11.776478 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:11Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:11 crc kubenswrapper[4982]: E0224 14:51:11.776788 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.120009 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.120152 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120216 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:16.120181186 +0000 UTC m=+197.739239689 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.120279 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120340 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120371 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.120375 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120390 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.120424 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120466 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 14:52:16.120438592 +0000 UTC m=+197.739497125 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120608 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120632 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120656 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120699 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120643 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120753 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:52:16.120710579 +0000 UTC m=+197.739769122 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120859 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:52:16.120836012 +0000 UTC m=+197.739894715 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.120903 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:52:16.120888635 +0000 UTC m=+197.739947288 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.145152 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.145203 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.145276 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.145526 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.145618 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.145820 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.146024 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.146114 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:12 crc kubenswrapper[4982]: I0224 14:51:12.221602 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.221790 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:51:12 crc kubenswrapper[4982]: E0224 14:51:12.221897 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs podName:99337e5a-7ecb-4ed1-8ec5-14979be84e68 nodeName:}" failed. No retries permitted until 2026-02-24 14:52:16.221869068 +0000 UTC m=+197.840927601 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs") pod "network-metrics-daemon-6gwqq" (UID: "99337e5a-7ecb-4ed1-8ec5-14979be84e68") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 14:51:13 crc kubenswrapper[4982]: I0224 14:51:13.148071 4982 scope.go:117] "RemoveContainer" containerID="82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015" Feb 24 14:51:13 crc kubenswrapper[4982]: I0224 14:51:13.929651 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/2.log" Feb 24 14:51:13 crc kubenswrapper[4982]: I0224 14:51:13.933973 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3"} Feb 24 14:51:13 crc kubenswrapper[4982]: I0224 14:51:13.934830 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:51:13 crc kubenswrapper[4982]: I0224 14:51:13.958046 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:13Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:13 crc kubenswrapper[4982]: I0224 14:51:13.994072 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37
401de55fd237bdabc1d0d9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:50Z\\\",\\\"message\\\":\\\":160\\\\nI0224 14:50:50.172915 7211 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:50.173226 7211 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:50.173270 7211 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:50.173281 7211 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:50.173307 7211 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:50.173328 7211 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:50.173379 7211 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:50.173424 7211 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:50.173424 7211 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 14:50:50.173444 7211 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:50.173451 7211 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 14:50:50.173468 7211 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:50:50.173482 7211 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:50.173530 7211 factory.go:656] Stopping watch factory\\\\nI0224 14:50:50.173552 7211 ovnkube.go:599] Stopped ovnkube\\\\nI0224 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:51:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:13Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.015958 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 
14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.035295 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.057675 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.084367 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.100248 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.112878 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.126105 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.137189 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.144563 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.144633 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:14 crc kubenswrapper[4982]: E0224 14:51:14.144681 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:14 crc kubenswrapper[4982]: E0224 14:51:14.144764 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.144643 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.144832 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:14 crc kubenswrapper[4982]: E0224 14:51:14.144838 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:14 crc kubenswrapper[4982]: E0224 14:51:14.145027 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.173351 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8b
cd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.192663 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc64417d-91fc-4b1c-9d05-c37b6b50c6f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:31Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo 
pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 14:49:01.797846 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 14:49:01.799848 1 observer_polling.go:159] Starting file observer\\\\nI0224 14:49:01.840363 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 14:49:01.843602 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 14:49:31.747863 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 14:49:31.748104 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.214015 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980e79a7-d090-44b6-9b7a-65275c2a6442\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ebbde60bef341e982e1e7fb5c237c5d5eedfdb72c4aa81627ecd582fc60ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4dce29a524baf6b13d3fb3167c75984a0ee416365faae953bf869a657a9061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0821facce4cb4bd700f535ccd91521a033cc2581ef0559a856bd016d0678e7ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.242510 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.262178 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"2026-02-24T14:50:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4\\\\n2026-02-24T14:50:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4 to /host/opt/cni/bin/\\\\n2026-02-24T14:50:23Z [verbose] multus-daemon started\\\\n2026-02-24T14:50:23Z [verbose] Readiness Indicator file check\\\\n2026-02-24T14:51:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.279568 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:14 crc kubenswrapper[4982]: E0224 14:51:14.285442 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.299425 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.325709 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.346567 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.940370 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/3.log"
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.941412 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/2.log"
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.945269 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3" exitCode=1
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.945320 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3"}
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.945368 4982 scope.go:117] "RemoveContainer" containerID="82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015"
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.946489 4982 scope.go:117] "RemoveContainer" containerID="73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3"
Feb 24 14:51:14 crc kubenswrapper[4982]: E0224 14:51:14.946873 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471"
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.966783 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:14 crc kubenswrapper[4982]: I0224 14:51:14.990575 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:14Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.010445 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.029302 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.045290 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.078252 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.098463 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc64417d-91fc-4b1c-9d05-c37b6b50c6f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:31Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 14:49:01.797846 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 14:49:01.799848 1 observer_polling.go:159] Starting file observer\\\\nI0224 14:49:01.840363 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 14:49:01.843602 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 14:49:31.747863 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 14:49:31.748104 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.116455 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980e79a7-d090-44b6-9b7a-65275c2a6442\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ebbde60bef341e982e1e7fb5c237c5d5eedfdb72c4aa81627ecd582fc60ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4dce29a524baf6b13d3fb3167c75984a0ee416365faae953bf869a657a9061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0821facce4cb4bd700f535ccd91521a033cc2581ef0559a856bd016d0678e7ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.135741 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.152109 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.171325 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.192757 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.216297 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.234299 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.257928 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\
\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed935
0e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.277383 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"2026-02-24T14:50:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4\\\\n2026-02-24T14:50:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4 to /host/opt/cni/bin/\\\\n2026-02-24T14:50:23Z [verbose] multus-daemon started\\\\n2026-02-24T14:50:23Z [verbose] Readiness Indicator file check\\\\n2026-02-24T14:51:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.297800 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.327620 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37
401de55fd237bdabc1d0d9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82e0d994b8eb16aba033274e9e0e16b1249d5a27d13a963f2c32d2680c329015\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:50:50Z\\\",\\\"message\\\":\\\":160\\\\nI0224 14:50:50.172915 7211 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0224 14:50:50.173226 7211 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:50:50.173270 7211 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 14:50:50.173281 7211 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:50:50.173307 7211 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 14:50:50.173328 7211 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:50:50.173379 7211 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:50:50.173424 7211 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:50:50.173424 7211 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 14:50:50.173444 7211 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:50:50.173451 7211 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 14:50:50.173468 7211 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:50:50.173482 7211 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:50:50.173530 7211 factory.go:656] Stopping watch factory\\\\nI0224 14:50:50.173552 7211 ovnkube.go:599] Stopped ovnkube\\\\nI0224 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:14Z\\\",\\\"message\\\":\\\"\\\\nI0224 14:51:14.376714 7510 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:51:14.376773 7510 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:51:14.377144 7510 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:51:14.376974 7510 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:51:14.376987 7510 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:51:14.377218 7510 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0224 14:51:14.377234 7510 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:51:14.377250 7510 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:51:14.377243 7510 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:51:14.377475 7510 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:51:14.377526 7510 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:51:14.377552 7510 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:51:14.377579 7510 factory.go:656] Stopping watch factory\\\\nI0224 14:51:14.377595 7510 ovnkube.go:599] Stopped ovnkube\\\\nI0224 
14:51:14.377638 7510 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:51:14.377656 7510 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:51:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:51:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.345482 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z" Feb 24 
14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.951206 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/3.log" Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.957222 4982 scope.go:117] "RemoveContainer" containerID="73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3" Feb 24 14:51:15 crc kubenswrapper[4982]: E0224 14:51:15.957455 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" Feb 24 14:51:15 crc kubenswrapper[4982]: I0224 14:51:15.982110 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:15Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.013308 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:14Z\\\",\\\"message\\\":\\\"\\\\nI0224 14:51:14.376714 7510 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:51:14.376773 7510 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:51:14.377144 7510 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:51:14.376974 7510 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:51:14.376987 7510 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:51:14.377218 7510 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0224 14:51:14.377234 7510 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:51:14.377250 7510 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:51:14.377243 7510 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:51:14.377475 7510 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:51:14.377526 7510 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:51:14.377552 7510 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:51:14.377579 7510 factory.go:656] Stopping watch factory\\\\nI0224 14:51:14.377595 7510 ovnkube.go:599] Stopped ovnkube\\\\nI0224 14:51:14.377638 7510 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:51:14.377656 7510 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:51:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:51:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.031996 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.048275 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.070340 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.091159 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.110062 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.126144 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.144858 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.144896 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.144918 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:16 crc kubenswrapper[4982]: E0224 14:51:16.145055 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.145097 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:16 crc kubenswrapper[4982]: E0224 14:51:16.145176 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:16 crc kubenswrapper[4982]: E0224 14:51:16.145259 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:16 crc kubenswrapper[4982]: E0224 14:51:16.145414 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.159424 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb
7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.180845 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc64417d-91fc-4b1c-9d05-c37b6b50c6f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:31Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 14:49:01.797846 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 14:49:01.799848 1 observer_polling.go:159] Starting file observer\\\\nI0224 14:49:01.840363 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 14:49:01.843602 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 14:49:31.747863 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 14:49:31.748104 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.199795 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980e79a7-d090-44b6-9b7a-65275c2a6442\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ebbde60bef341e982e1e7fb5c237c5d5eedfdb72c4aa81627ecd582fc60ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4dce29a524baf6b13d3fb3167c75984a0ee416365faae953bf869a657a9061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0821facce4cb4bd700f535ccd91521a033cc2581ef0559a856bd016d0678e7ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.220145 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.238817 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.255412 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.275692 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.295934 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.314041 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.336938 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\
\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed935
0e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:16 crc kubenswrapper[4982]: I0224 14:51:16.356551 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"2026-02-24T14:50:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4\\\\n2026-02-24T14:50:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4 to /host/opt/cni/bin/\\\\n2026-02-24T14:50:23Z [verbose] multus-daemon started\\\\n2026-02-24T14:50:23Z [verbose] Readiness Indicator file check\\\\n2026-02-24T14:51:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:16Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:18 crc kubenswrapper[4982]: I0224 14:51:18.144530 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:18 crc kubenswrapper[4982]: I0224 14:51:18.144636 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:18 crc kubenswrapper[4982]: E0224 14:51:18.144737 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:18 crc kubenswrapper[4982]: I0224 14:51:18.144649 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:18 crc kubenswrapper[4982]: E0224 14:51:18.144851 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:18 crc kubenswrapper[4982]: I0224 14:51:18.144967 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:18 crc kubenswrapper[4982]: E0224 14:51:18.145094 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:18 crc kubenswrapper[4982]: E0224 14:51:18.145241 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.183104 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8b
cd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.208295 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc64417d-91fc-4b1c-9d05-c37b6b50c6f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:31Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo 
pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 14:49:01.797846 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 14:49:01.799848 1 observer_polling.go:159] Starting file observer\\\\nI0224 14:49:01.840363 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 14:49:01.843602 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 14:49:31.747863 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 14:49:31.748104 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.229207 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980e79a7-d090-44b6-9b7a-65275c2a6442\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ebbde60bef341e982e1e7fb5c237c5d5eedfdb72c4aa81627ecd582fc60ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4dce29a524baf6b13d3fb3167c75984a0ee416365faae953bf869a657a9061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0821facce4cb4bd700f535ccd91521a033cc2581ef0559a856bd016d0678e7ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.248381 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.263453 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.285152 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: E0224 14:51:19.286553 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.308360 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.328676 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.351893 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.371607 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"2026-02-24T14:50:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4\\\\n2026-02-24T14:50:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4 to /host/opt/cni/bin/\\\\n2026-02-24T14:50:23Z [verbose] multus-daemon started\\\\n2026-02-24T14:50:23Z [verbose] Readiness Indicator file check\\\\n2026-02-24T14:51:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.389313 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.416433 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.448829 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37
401de55fd237bdabc1d0d9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:14Z\\\",\\\"message\\\":\\\"\\\\nI0224 14:51:14.376714 7510 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:51:14.376773 7510 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:51:14.377144 7510 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:51:14.376974 7510 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:51:14.376987 7510 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:51:14.377218 7510 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0224 14:51:14.377234 7510 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:51:14.377250 7510 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:51:14.377243 7510 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:51:14.377475 7510 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:51:14.377526 7510 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:51:14.377552 7510 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:51:14.377579 7510 factory.go:656] Stopping watch factory\\\\nI0224 14:51:14.377595 7510 ovnkube.go:599] Stopped ovnkube\\\\nI0224 14:51:14.377638 7510 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:51:14.377656 7510 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:51:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:51:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.466845 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.484928 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.506706 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.530067 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.530067 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z"
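
The quoted payloads are strategic-merge patches: the $setElementOrder/conditions directive pins the ordering of the merged conditions list, and individual entries are matched on their type key, so only the changed conditions have to be sent. A sketch of how a payload of this shape can be assembled (illustrative only; the UID and condition values are copied from the entry above, and this is not the kubelet's own code):

```go
// patchshape.go — builds a status patch with the same shape as the
// rejected payload above (strategic merge patch with $setElementOrder).
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"metadata": map[string]any{"uid": "3b6479f0-333b-4a96-9adf-2099afdc2447"},
		"status": map[string]any{
			// Desired post-merge ordering of the list, keyed by "type".
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// Only the conditions that changed; each is merged into the
			// existing list entry with the matching "type".
			"conditions": []map[string]any{
				{"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
			},
		},
	}
	out, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Because the webhook rejects the request before admission completes, none of these patches is applied, and the pod's stored status stays stale while the kubelet keeps resending it.
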
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:19 crc kubenswrapper[4982]: I0224 14:51:19.565362 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:19Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:20 crc kubenswrapper[4982]: I0224 14:51:20.144846 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:20 crc kubenswrapper[4982]: I0224 14:51:20.144969 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:20 crc kubenswrapper[4982]: I0224 14:51:20.145013 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:20 crc kubenswrapper[4982]: I0224 14:51:20.145357 4982 util.go:30] "No sandbox for pod can be found. 
Feb 24 14:51:20 crc kubenswrapper[4982]: I0224 14:51:20.145357 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:51:20 crc kubenswrapper[4982]: E0224 14:51:20.145455 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:51:20 crc kubenswrapper[4982]: E0224 14:51:20.145571 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:51:20 crc kubenswrapper[4982]: E0224 14:51:20.145613 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:51:20 crc kubenswrapper[4982]: E0224 14:51:20.145671 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
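
These four pods cannot get sandboxes because the runtime's network-ready probe finds no CNI configuration: NetworkReady only flips to true once a config file appears under /etc/kubernetes/cni/net.d (written by the network pods once they run, which are in turn blocked by the same expired certificate). A rough sketch of that directory probe (an assumption-level simplification; the real check lives in the container runtime's CNI handling and validates the files via libcni rather than just globbing):

```go
// cniready.go — rough sketch of the "no CNI configuration file" probe.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	var confs []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		confs = append(confs, m...)
	}
	if len(confs) == 0 {
		// The condition behind NetworkReady=false in the entries above.
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", confDir)
		return
	}
	fmt.Println("CNI config candidates:", confs)
}
```

Until a network pod writes a config into that directory, sandbox creation is skipped and these four pods keep looping; as the next entries show, the node's own Ready condition goes false for the same reason.
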
Has your network provider started?"} Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.109116 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.114445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.114526 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.114545 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.114568 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.114584 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:22Z","lastTransitionTime":"2026-02-24T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.136407 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.140962 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.141026 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.141052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.141083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.141105 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:22Z","lastTransitionTime":"2026-02-24T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.145306 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.145351 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.145442 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.145492 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.145710 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.146217 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.146381 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.146493 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.160956 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.165806 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.165860 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.165883 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.165910 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.165933 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:22Z","lastTransitionTime":"2026-02-24T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.185221 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:22Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.189344 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.189394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.189410 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.189429 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:22 crc kubenswrapper[4982]: I0224 14:51:22.189445 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:22Z","lastTransitionTime":"2026-02-24T14:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.204744 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:22Z is after 2025-08-24T17:21:41Z"
Feb 24 14:51:22 crc kubenswrapper[4982]: E0224 14:51:22.205018 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
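Every status-update retry in this burst dies at the same final hop: the PATCH reaches the apiserver, which then cannot call the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, six months before the node's clock reading of 2026-02-24. After the retries are exhausted the kubelet gives up with "update node status exceeds retry count". A quick out-of-band way to confirm such a diagnosis is to dial the listener and print each peer certificate's validity window; the Go sketch below is illustrative only and not part of this journal (the address comes from the error text above, everything else is an assumption):

    // certcheck.go — dial a TLS endpoint and report certificate validity.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	addr := "127.0.0.1:9743" // webhook endpoint from the log; pass another as argv[1]
    	if len(os.Args) > 1 {
    		addr = os.Args[1]
    	}
    	// InsecureSkipVerify lets the handshake complete even with an expired
    	// chain, so the dates can be inspected instead of reproducing the
    	// kubelet's "certificate has expired" error.
    	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "dial:", err)
    		os.Exit(1)
    	}
    	defer conn.Close()
    	now := time.Now()
    	for _, cert := range conn.ConnectionState().PeerCertificates {
    		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
    			cert.Subject.CommonName,
    			cert.NotBefore.Format(time.RFC3339),
    			cert.NotAfter.Format(time.RFC3339),
    			now.After(cert.NotAfter))
    	}
    }

Run against the address above, this would report notAfter=2025-08-24T17:21:41Z with expired=true for the serving certificate, matching the x509 message in the retries.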
Feb 24 14:51:24 crc kubenswrapper[4982]: I0224 14:51:24.144650 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:51:24 crc kubenswrapper[4982]: I0224 14:51:24.144739 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:51:24 crc kubenswrapper[4982]: I0224 14:51:24.144753 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:51:24 crc kubenswrapper[4982]: I0224 14:51:24.144650 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:51:24 crc kubenswrapper[4982]: E0224 14:51:24.144900 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:51:24 crc kubenswrapper[4982]: E0224 14:51:24.145023 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:51:24 crc kubenswrapper[4982]: E0224 14:51:24.145143 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 14:51:24 crc kubenswrapper[4982]: E0224 14:51:24.145380 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68"
Feb 24 14:51:24 crc kubenswrapper[4982]: E0224 14:51:24.287490 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 14:51:26 crc kubenswrapper[4982]: I0224 14:51:26.145439 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq"
Feb 24 14:51:26 crc kubenswrapper[4982]: I0224 14:51:26.145563 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:26 crc kubenswrapper[4982]: E0224 14:51:26.145933 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:26 crc kubenswrapper[4982]: I0224 14:51:26.145611 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:26 crc kubenswrapper[4982]: I0224 14:51:26.145555 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:26 crc kubenswrapper[4982]: E0224 14:51:26.146118 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:26 crc kubenswrapper[4982]: E0224 14:51:26.146054 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:26 crc kubenswrapper[4982]: E0224 14:51:26.146237 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:28 crc kubenswrapper[4982]: I0224 14:51:28.144798 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:28 crc kubenswrapper[4982]: I0224 14:51:28.144905 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:28 crc kubenswrapper[4982]: E0224 14:51:28.144970 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:28 crc kubenswrapper[4982]: I0224 14:51:28.144992 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:51:28 crc kubenswrapper[4982]: I0224 14:51:28.145131 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:51:28 crc kubenswrapper[4982]: E0224 14:51:28.145288 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 14:51:28 crc kubenswrapper[4982]: E0224 14:51:28.145414 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:51:28 crc kubenswrapper[4982]: E0224 14:51:28.145570 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
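The second signature repeating every two seconds above is independent of the webhook: sandboxes cannot be created because /etc/kubernetes/cni/net.d/ contains no CNI configuration, and the component that would write it, OVN-Kubernetes, is itself crash-looping in the entries that follow (ovnkube-controller in a 40s back-off), which is consistent with the expired-certificate failures elsewhere in this journal. A minimal probe of that directory is sketched below; the path is taken from the log message, while the file extensions are the ones CNI runtimes conventionally load (an assumption, not something this journal states):

    // cnicheck.go — report whether any CNI network config is present.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
    	var found []string
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		matches, _ := filepath.Glob(filepath.Join(dir, pat))
    		found = append(found, matches...)
    	}
    	if len(found) == 0 {
    		fmt.Printf("no CNI configuration in %s; has the network plugin started?\n", dir)
    		os.Exit(1)
    	}
    	for _, f := range found {
    		fmt.Println("found:", f)
    	}
    }

Until the network plugin writes a config file there, the kubelet keeps reporting NetworkReady=false and every pod that needs a new sandbox stays stuck in "Error syncing pod".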
Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.146051 4982 scope.go:117] "RemoveContainer" containerID="73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3"
Feb 24 14:51:29 crc kubenswrapper[4982]: E0224 14:51:29.147047 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471"
Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.165165 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980e79a7-d090-44b6-9b7a-65275c2a6442\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ebbde60bef341e982e1e7fb5c237c5d5eedfdb72c4aa81627ecd582fc60ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4dce29a524baf6b13d3fb3167c75984a0ee416365faae953bf869a657a9061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0821facce4cb4bd700f535ccd91521a033cc2581ef0559a856bd016d0678e7ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.184894 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.201979 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.236644 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.258127 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc64417d-91fc-4b1c-9d05-c37b6b50c6f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:31Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 14:49:01.797846 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 14:49:01.799848 1 observer_polling.go:159] Starting file observer\\\\nI0224 14:49:01.840363 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 14:49:01.843602 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 14:49:31.747863 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 14:49:31.748104 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.276316 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: E0224 14:51:29.289492 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.301567 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7
e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\
\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.346857 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"2026-02-24T14:50:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4\\\\n2026-02-24T14:50:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4 to /host/opt/cni/bin/\\\\n2026-02-24T14:50:23Z [verbose] multus-daemon started\\\\n2026-02-24T14:50:23Z [verbose] Readiness Indicator file check\\\\n2026-02-24T14:51:08Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.370412 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.386812 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.399487 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.411078 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 
14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.424647 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.444093 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e5
49e14dc63809ed37401de55fd237bdabc1d0d9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:14Z\\\",\\\"message\\\":\\\"\\\\nI0224 14:51:14.376714 7510 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:51:14.376773 7510 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:51:14.377144 7510 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:51:14.376974 7510 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:51:14.376987 7510 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:51:14.377218 7510 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0224 14:51:14.377234 7510 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:51:14.377250 7510 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:51:14.377243 7510 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:51:14.377475 7510 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:51:14.377526 7510 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:51:14.377552 7510 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:51:14.377579 7510 factory.go:656] Stopping watch factory\\\\nI0224 14:51:14.377595 7510 ovnkube.go:599] Stopped ovnkube\\\\nI0224 14:51:14.377638 7510 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:51:14.377656 7510 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:51:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:51:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.459034 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.472547 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.485870 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.498099 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:29 crc kubenswrapper[4982]: I0224 14:51:29.516987 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:29Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:30 crc kubenswrapper[4982]: I0224 14:51:30.145342 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:30 crc kubenswrapper[4982]: I0224 14:51:30.145371 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:30 crc kubenswrapper[4982]: E0224 14:51:30.145551 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:30 crc kubenswrapper[4982]: I0224 14:51:30.145593 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:30 crc kubenswrapper[4982]: I0224 14:51:30.145604 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:30 crc kubenswrapper[4982]: E0224 14:51:30.145780 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:30 crc kubenswrapper[4982]: E0224 14:51:30.145859 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:30 crc kubenswrapper[4982]: E0224 14:51:30.145966 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.145562 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.145606 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.145645 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.145697 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:32 crc kubenswrapper[4982]: E0224 14:51:32.145764 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:32 crc kubenswrapper[4982]: E0224 14:51:32.145888 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:32 crc kubenswrapper[4982]: E0224 14:51:32.146012 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:32 crc kubenswrapper[4982]: E0224 14:51:32.146156 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.513547 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.513602 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.513619 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.513642 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.513660 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:32Z","lastTransitionTime":"2026-02-24T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:51:32 crc kubenswrapper[4982]: E0224 14:51:32.534993 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.540256 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.540299 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.540311 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.540328 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.540341 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:32Z","lastTransitionTime":"2026-02-24T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:32 crc kubenswrapper[4982]: E0224 14:51:32.559643 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.564223 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.564290 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.564313 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.564337 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.564361 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:32Z","lastTransitionTime":"2026-02-24T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:32 crc kubenswrapper[4982]: E0224 14:51:32.585344 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:32Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.590487 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.590616 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.590635 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.590660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.590679 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:32Z","lastTransitionTime":"2026-02-24T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.617394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.617461 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.617479 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.617532 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:32 crc kubenswrapper[4982]: I0224 14:51:32.617576 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:32Z","lastTransitionTime":"2026-02-24T14:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:32 crc kubenswrapper[4982]: E0224 14:51:32.638350 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
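Every attempt in the retry loop above fails identically: the node-status PATCH is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose notAfter, 2025-08-24T17:21:41Z, lies roughly six months (183 days, 21:29:51) before the node's clock time of 2026-02-24T14:51:32Z. A probe along the following lines can confirm the serving certificate's validity window from the node itself. This is a hypothetical sketch, not part of the log: the host and port are taken from the webhook URL in the error above, and the third-party cryptography package (version 42 or later, for the *_utc properties) is assumed to be available.

```python
# Hypothetical probe (not part of the log): read the webhook's serving
# certificate without verifying it, so an expired cert can still be inspected.
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party dependency, assumed installed


def serving_cert_window(host: str, port: int):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect the cert, don't trust it
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    return cert.not_valid_before_utc, cert.not_valid_after_utc


if __name__ == "__main__":
    # Endpoint taken from the webhook error in the log above.
    not_before, not_after = serving_cert_window("127.0.0.1", 9743)
    now = datetime.now(timezone.utc)
    print(f"valid from {not_before} until {not_after}; now {now}")
    if now > not_after:
        print(f"expired {now - not_after} ago")  # the log implies ~184 days
```

Unlike the kubelet's client, the probe disables verification, so its handshake succeeds even against an expired certificate; the kubelet, which does verify, keeps failing exactly as logged until the webhook's serving certificate is rotated.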
Feb 24 14:51:34 crc kubenswrapper[4982]: I0224 14:51:34.144560 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:34 crc kubenswrapper[4982]: I0224 14:51:34.144677 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:34 crc kubenswrapper[4982]: I0224 14:51:34.144560 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:34 crc kubenswrapper[4982]: E0224 14:51:34.144766 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:34 crc kubenswrapper[4982]: I0224 14:51:34.144588 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:34 crc kubenswrapper[4982]: E0224 14:51:34.144872 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:34 crc kubenswrapper[4982]: E0224 14:51:34.144964 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:34 crc kubenswrapper[4982]: E0224 14:51:34.145094 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:34 crc kubenswrapper[4982]: E0224 14:51:34.290934 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 14:51:36 crc kubenswrapper[4982]: I0224 14:51:36.144862 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:36 crc kubenswrapper[4982]: I0224 14:51:36.145008 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:36 crc kubenswrapper[4982]: E0224 14:51:36.145250 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:36 crc kubenswrapper[4982]: I0224 14:51:36.145316 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:36 crc kubenswrapper[4982]: I0224 14:51:36.145280 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:36 crc kubenswrapper[4982]: E0224 14:51:36.145428 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:36 crc kubenswrapper[4982]: E0224 14:51:36.145604 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:36 crc kubenswrapper[4982]: E0224 14:51:36.145756 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:38 crc kubenswrapper[4982]: I0224 14:51:38.145341 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:38 crc kubenswrapper[4982]: I0224 14:51:38.145424 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:38 crc kubenswrapper[4982]: I0224 14:51:38.145475 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:38 crc kubenswrapper[4982]: E0224 14:51:38.145648 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:38 crc kubenswrapper[4982]: I0224 14:51:38.145799 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:38 crc kubenswrapper[4982]: E0224 14:51:38.146029 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:38 crc kubenswrapper[4982]: E0224 14:51:38.146850 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:38 crc kubenswrapper[4982]: E0224 14:51:38.147034 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.168240 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd57107e97c4db418b8528a334b2c2422834f8f09100ffd07078578b98f042d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.201220 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91cccac8-913c-4bcf-a654-298dfce0a471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37
401de55fd237bdabc1d0d9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:14Z\\\",\\\"message\\\":\\\"\\\\nI0224 14:51:14.376714 7510 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 14:51:14.376773 7510 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 14:51:14.377144 7510 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 14:51:14.376974 7510 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 14:51:14.376987 7510 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 14:51:14.377218 7510 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0224 14:51:14.377234 7510 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 14:51:14.377250 7510 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 14:51:14.377243 7510 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 14:51:14.377475 7510 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 14:51:14.377526 7510 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 14:51:14.377552 7510 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 14:51:14.377579 7510 factory.go:656] Stopping watch factory\\\\nI0224 14:51:14.377595 7510 ovnkube.go:599] Stopped ovnkube\\\\nI0224 14:51:14.377638 7510 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 14:51:14.377656 7510 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 14:51:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:51:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjhfr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-96fkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.222230 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"465fb356-3c99-4881-81aa-0cad744fd120\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://530f33084daa645919df0fdc97d2522d344493ed8fdcddc5529587b2e08c97e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70eda6211744b4c253ccad8bc3099a0c90ba9f785ce9897a834c268325a8f2bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv4kz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z9jtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.241239 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81a3969e-b727-4271-902f-209711bc7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ea2e7a7d8d994413fadc3a9cfc6b7e04690b413d523994ddc582da19efc588f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a29e4a7cb90aa6714119fb5888b8078a616d4f2c774cb3da83972ebd8a9f083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.264570 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612fad56-511c-4961-aab1-974e6a1019ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:40Z\\\",\\\"message\\\":\\\" 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 14:49:40.105287 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nI0224 14:49:40.394117 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0224 14:49:40.399486 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0224 14:49:40.399554 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0224 14:49:40.399598 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0224 14:49:40.399610 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0224 14:49:40.409863 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0224 14:49:40.409889 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0224 14:49:40.409899 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0224 14:49:40.409903 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0224 14:49:40.409906 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0224 14:49:40.409909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0224 14:49:40.410123 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0224 14:49:40.414863 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-590062561/tls.crt::/tmp/serving-cert-590062561/tls.key\\\\\\\"\\\\nF0224 14:49:40.414876 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.286591 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: E0224 14:51:39.292086 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.317797 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc22bf4d0e23aed90fbc512dc5ffaf3d8203e866baed82a73ffda9c7aab542ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.336640 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hg2sm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d426fc2-19af-43bc-a39c-c63afb2d9909\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdaa1e14fddf9123ff3127cad0cc10d206968d14ba37d0e1158546bfa97f38d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbzb9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hg2sm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.372798 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f3eb0f-32c1-4f53-a39c-27558af735b1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a29c0675aa411bf6de42b14fcfd074c5096d64790a411c767bbef37162c8bd19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50bf4955e0189b4fd9f22bfc67d83a76e0b9041cdc4af7b454e49e19aed1435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eeaeea809cc0976410cfb1c92414901cdd9540e9a2fd23ff5aa87846f238700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://359b8b5bb21ddf675f292c48c05f7ae353b73cb
7fc8f643809c33717bbaeda99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdd32aae86acf840f35be660f0b91f5b17bade12215f8776a62aeb1ef97901a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e0bbeb99a3251b1dfa1cb127285e7a86a5e94f1ba77b93c49dfa042022fa1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ad13755e191f84f03984ad2f729f1edd9ddb8bcd9e2c0be67cdfda4cb6881e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f21c234c7cbf1d06e3712f0a365461c1db4b4353b3b22077555a86e1d36138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.401033 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc64417d-91fc-4b1c-9d05-c37b6b50c6f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8f7a77dceb60a8cf7eceb9d81d9b1221ce0257792212577e4cb61cc4ad8a789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ecd9e0274dc3d8645693e3935c670c7bc36b4d1d79edb321a8b382c1c0c3248\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T14:49:31Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 14:49:01.797846 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 14:49:01.799848 1 observer_polling.go:159] Starting file observer\\\\nI0224 14:49:01.840363 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 14:49:01.843602 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 14:49:31.747863 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 14:49:31.748104 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7152d51f62491c8df4aa592647a70f420b019bb2510fb3a7b46e0ce104e27182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc59aa86bf70ac51693ecd46e14014a286fe27d6d4fed86a1fc671c5b45bbebf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.420338 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980e79a7-d090-44b6-9b7a-65275c2a6442\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:48:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32ebbde60bef341e982e1e7fb5c237c5d5eedfdb72c4aa81627ecd582fc60ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4dce29a524baf6b13d3fb3167c75984a0ee416365faae953bf869a657a9061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0821facce4cb4bd700f535ccd91521a033cc2581ef0559a856bd016d0678e7ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:49:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75be58151b22db7053e588fbdb795745bddeadc61c9cdd43fc2efbaf2237c784\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:49:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:49:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:48:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.440038 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.455094 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ccj66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8556181-42f0-45af-8922-fd147917bce5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa69cbf6e2c6ef97164aaecce2897355162bf078c2f8f8d517ce3250b0aa1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66lkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ccj66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.475387 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.498765 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://243d212d874554c5856b1b9dcb27c5df3a69f4f3b1dedfc1a2e43b4502974a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91801af90c3f1142de6ab267a024f2076ccd4b48015185d9d1192b12757cc1e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.517660 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf688571-4e47-42da-80b4-0d54580ce6c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce83339c0a6d64253ba552a86e86ab4152859afb19055f3d63bf1363e186de62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhcxb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b79sf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.542416 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lknrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42019c71-4e1e-4a98-aee6-91061deb320a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e46cd3657a49bcc2c6741cc83a05378123119b17f249452e8be9ad1e7701e882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:50:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c55f379b5fc2168f3e3663c31efad2139b919c99d369fc491f347c02593acc7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a4d39554cf4bbc79834182aba2c226947c5f8a07c43f82b7d6afadfcb56562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fca0c69caa7b2e03e7b73f8c1513934550d2a41781c5be14538550bb0edcb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\
\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a4bf0b6b7195124000e4ea54d2d8b9b543d171135fdc5c724d13747d0479482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69eaa17ce46f8d96be9f40e77ea8f0f2495f2c37c33d6592569d01aa1aef25a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed9350e2fe328a2d3099e3d98caf59b82a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2abf65bafca3c8ebafd8d38ced4ed935
0e2fe328a2d3099e3d98caf59b82a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T14:50:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fhm4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lknrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.567699 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-jgtdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86687a8a-6996-44fa-a62e-b43266c31922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T14:51:08Z\\\",\\\"message\\\":\\\"2026-02-24T14:50:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4\\\\n2026-02-24T14:50:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_60983708-a867-4805-8d0f-b4a7781fa8a4 to /host/opt/cni/bin/\\\\n2026-02-24T14:50:23Z [verbose] multus-daemon started\\\\n2026-02-24T14:50:23Z [verbose] Readiness Indicator file check\\\\n2026-02-24T14:51:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T14:50:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T14:51:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jzhg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-jgtdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:39 crc kubenswrapper[4982]: I0224 14:51:39.585617 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99337e5a-7ecb-4ed1-8ec5-14979be84e68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T14:50:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxw68\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T14:50:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6gwqq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:39Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:40 crc kubenswrapper[4982]: I0224 14:51:40.144532 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:40 crc kubenswrapper[4982]: I0224 14:51:40.144579 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:40 crc kubenswrapper[4982]: I0224 14:51:40.144538 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:40 crc kubenswrapper[4982]: E0224 14:51:40.144735 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:40 crc kubenswrapper[4982]: I0224 14:51:40.144790 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:40 crc kubenswrapper[4982]: E0224 14:51:40.144914 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:40 crc kubenswrapper[4982]: E0224 14:51:40.145008 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:40 crc kubenswrapper[4982]: E0224 14:51:40.145092 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.144947 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.144990 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:42 crc kubenswrapper[4982]: E0224 14:51:42.145170 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.145249 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:42 crc kubenswrapper[4982]: E0224 14:51:42.145345 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.145387 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:42 crc kubenswrapper[4982]: E0224 14:51:42.145475 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:42 crc kubenswrapper[4982]: E0224 14:51:42.145773 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.654582 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.654661 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.654681 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.654707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.654728 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:42Z","lastTransitionTime":"2026-02-24T14:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 14:51:42 crc kubenswrapper[4982]: E0224 14:51:42.676749 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:42Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.681726 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.681775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.734951 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.734973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.735023 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:42Z","lastTransitionTime":"2026-02-24T14:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:42 crc kubenswrapper[4982]: E0224 14:51:42.750809 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:42Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.755357 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.755614 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.755752 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.755906 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:42 crc kubenswrapper[4982]: I0224 14:51:42.756045 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:42Z","lastTransitionTime":"2026-02-24T14:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:42 crc kubenswrapper[4982]: E0224 14:51:42.776227 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T14:51:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f285ac22-b5ce-4ddc-b9c2-52ac2ce3645f\\\",\\\"systemUUID\\\":\\\"1f50fe44-226c-4567-9cfd-6e69cfb222c6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T14:51:42Z is after 2025-08-24T17:21:41Z" Feb 24 14:51:42 crc kubenswrapper[4982]: E0224 14:51:42.776474 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 14:51:43 crc kubenswrapper[4982]: I0224 14:51:43.146292 4982 scope.go:117] "RemoveContainer" 
containerID="73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3" Feb 24 14:51:43 crc kubenswrapper[4982]: E0224 14:51:43.146576 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-96fkj_openshift-ovn-kubernetes(91cccac8-913c-4bcf-a654-298dfce0a471)\"" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" Feb 24 14:51:44 crc kubenswrapper[4982]: I0224 14:51:44.144774 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:44 crc kubenswrapper[4982]: I0224 14:51:44.144857 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:44 crc kubenswrapper[4982]: I0224 14:51:44.144819 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:44 crc kubenswrapper[4982]: I0224 14:51:44.144787 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:44 crc kubenswrapper[4982]: E0224 14:51:44.144993 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:44 crc kubenswrapper[4982]: E0224 14:51:44.145097 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:44 crc kubenswrapper[4982]: E0224 14:51:44.145219 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:44 crc kubenswrapper[4982]: E0224 14:51:44.145386 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:44 crc kubenswrapper[4982]: E0224 14:51:44.294248 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 24 14:51:46 crc kubenswrapper[4982]: I0224 14:51:46.145341 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:46 crc kubenswrapper[4982]: I0224 14:51:46.145338 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:46 crc kubenswrapper[4982]: E0224 14:51:46.146031 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:46 crc kubenswrapper[4982]: I0224 14:51:46.145407 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:46 crc kubenswrapper[4982]: I0224 14:51:46.145376 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:46 crc kubenswrapper[4982]: E0224 14:51:46.146171 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:46 crc kubenswrapper[4982]: E0224 14:51:46.146265 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:46 crc kubenswrapper[4982]: E0224 14:51:46.146346 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:48 crc kubenswrapper[4982]: I0224 14:51:48.144866 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:48 crc kubenswrapper[4982]: I0224 14:51:48.144879 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:48 crc kubenswrapper[4982]: E0224 14:51:48.145942 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:48 crc kubenswrapper[4982]: I0224 14:51:48.144928 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:48 crc kubenswrapper[4982]: E0224 14:51:48.145820 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:48 crc kubenswrapper[4982]: I0224 14:51:48.145072 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:48 crc kubenswrapper[4982]: E0224 14:51:48.146174 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:48 crc kubenswrapper[4982]: E0224 14:51:48.146325 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.204022 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=80.203995262 podStartE2EDuration="1m20.203995262s" podCreationTimestamp="2026-02-24 14:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.178472304 +0000 UTC m=+170.797530837" watchObservedRunningTime="2026-02-24 14:51:49.203995262 +0000 UTC m=+170.823053785" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.220920 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=101.220889034 podStartE2EDuration="1m41.220889034s" podCreationTimestamp="2026-02-24 14:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.204150506 +0000 UTC m=+170.823209049" watchObservedRunningTime="2026-02-24 14:51:49.220889034 +0000 UTC m=+170.839947577" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.262213 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hg2sm" podStartSLOduration=136.262189304 podStartE2EDuration="2m16.262189304s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.261658511 +0000 UTC m=+170.880717054" watchObservedRunningTime="2026-02-24 14:51:49.262189304 +0000 UTC m=+170.881247837" Feb 24 14:51:49 crc kubenswrapper[4982]: E0224 14:51:49.295585 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.296271 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=68.296237066 podStartE2EDuration="1m8.296237066s" podCreationTimestamp="2026-02-24 14:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.295902917 +0000 UTC m=+170.914961470" watchObservedRunningTime="2026-02-24 14:51:49.296237066 +0000 UTC m=+170.915295599" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.340756 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=45.34073363 podStartE2EDuration="45.34073363s" podCreationTimestamp="2026-02-24 14:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.32199526 +0000 UTC m=+170.941053753" watchObservedRunningTime="2026-02-24 14:51:49.34073363 +0000 UTC m=+170.959792123" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.341083 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.34107806 podStartE2EDuration="38.34107806s" podCreationTimestamp="2026-02-24 14:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.340532675 +0000 UTC m=+170.959591208" watchObservedRunningTime="2026-02-24 14:51:49.34107806 +0000 UTC m=+170.960136553" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.372028 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ccj66" podStartSLOduration=136.372003909 podStartE2EDuration="2m16.372003909s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.3713051 +0000 UTC m=+170.990363673" watchObservedRunningTime="2026-02-24 14:51:49.372003909 +0000 UTC m=+170.991062402" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.433412 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podStartSLOduration=136.433383656 podStartE2EDuration="2m16.433383656s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.432449401 +0000 UTC m=+171.051507894" watchObservedRunningTime="2026-02-24 14:51:49.433383656 +0000 UTC m=+171.052442179" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.461989 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lknrx" podStartSLOduration=136.461950813 podStartE2EDuration="2m16.461950813s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.461668076 +0000 UTC m=+171.080726599" watchObservedRunningTime="2026-02-24 14:51:49.461950813 +0000 UTC m=+171.081009346" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.505415 4982 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jgtdj" podStartSLOduration=136.50537397 podStartE2EDuration="2m16.50537397s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.487342178 +0000 UTC m=+171.106400711" watchObservedRunningTime="2026-02-24 14:51:49.50537397 +0000 UTC m=+171.124432503" Feb 24 14:51:49 crc kubenswrapper[4982]: I0224 14:51:49.596129 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z9jtq" podStartSLOduration=135.596101255 podStartE2EDuration="2m15.596101255s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:49.595739756 +0000 UTC m=+171.214798279" watchObservedRunningTime="2026-02-24 14:51:49.596101255 +0000 UTC m=+171.215159768" Feb 24 14:51:50 crc kubenswrapper[4982]: I0224 14:51:50.144830 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:50 crc kubenswrapper[4982]: I0224 14:51:50.144876 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:50 crc kubenswrapper[4982]: I0224 14:51:50.144878 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:50 crc kubenswrapper[4982]: E0224 14:51:50.145028 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:50 crc kubenswrapper[4982]: I0224 14:51:50.145068 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:50 crc kubenswrapper[4982]: E0224 14:51:50.145257 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:50 crc kubenswrapper[4982]: E0224 14:51:50.145491 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:50 crc kubenswrapper[4982]: E0224 14:51:50.145787 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:52 crc kubenswrapper[4982]: I0224 14:51:52.144975 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:52 crc kubenswrapper[4982]: I0224 14:51:52.145058 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:52 crc kubenswrapper[4982]: E0224 14:51:52.145197 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:52 crc kubenswrapper[4982]: I0224 14:51:52.145241 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:52 crc kubenswrapper[4982]: I0224 14:51:52.145287 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:52 crc kubenswrapper[4982]: E0224 14:51:52.145641 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:52 crc kubenswrapper[4982]: E0224 14:51:52.145746 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:52 crc kubenswrapper[4982]: E0224 14:51:52.145859 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.027781 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.027970 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.027994 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.028034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.028058 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T14:51:53Z","lastTransitionTime":"2026-02-24T14:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.078469 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg"] Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.079423 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.081988 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.082157 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.082290 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.082489 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.183963 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.192457 4982 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.238490 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff78feec-b083-4140-9231-3f07b9ed3a04-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.238664 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff78feec-b083-4140-9231-3f07b9ed3a04-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" 
(UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.238784 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff78feec-b083-4140-9231-3f07b9ed3a04-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.238838 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ff78feec-b083-4140-9231-3f07b9ed3a04-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.238881 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ff78feec-b083-4140-9231-3f07b9ed3a04-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.339939 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff78feec-b083-4140-9231-3f07b9ed3a04-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.340052 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff78feec-b083-4140-9231-3f07b9ed3a04-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.340178 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff78feec-b083-4140-9231-3f07b9ed3a04-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.340233 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ff78feec-b083-4140-9231-3f07b9ed3a04-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.340279 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ff78feec-b083-4140-9231-3f07b9ed3a04-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: 
\"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.340406 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ff78feec-b083-4140-9231-3f07b9ed3a04-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.340413 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ff78feec-b083-4140-9231-3f07b9ed3a04-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.341881 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff78feec-b083-4140-9231-3f07b9ed3a04-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.353902 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff78feec-b083-4140-9231-3f07b9ed3a04-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.371145 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff78feec-b083-4140-9231-3f07b9ed3a04-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrvvg\" (UID: \"ff78feec-b083-4140-9231-3f07b9ed3a04\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: I0224 14:51:53.402491 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" Feb 24 14:51:53 crc kubenswrapper[4982]: W0224 14:51:53.426796 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff78feec_b083_4140_9231_3f07b9ed3a04.slice/crio-8ed3d6a8a0d1749553381d98a85735fde296e2eebfa48f657a1e395f76e33808 WatchSource:0}: Error finding container 8ed3d6a8a0d1749553381d98a85735fde296e2eebfa48f657a1e395f76e33808: Status 404 returned error can't find the container with id 8ed3d6a8a0d1749553381d98a85735fde296e2eebfa48f657a1e395f76e33808 Feb 24 14:51:54 crc kubenswrapper[4982]: I0224 14:51:54.100550 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" event={"ID":"ff78feec-b083-4140-9231-3f07b9ed3a04","Type":"ContainerStarted","Data":"ff953d440c993100b7d01c915f1a1ef7a2f97ea2a0fc612f7a831bec35772837"} Feb 24 14:51:54 crc kubenswrapper[4982]: I0224 14:51:54.100663 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" event={"ID":"ff78feec-b083-4140-9231-3f07b9ed3a04","Type":"ContainerStarted","Data":"8ed3d6a8a0d1749553381d98a85735fde296e2eebfa48f657a1e395f76e33808"} Feb 24 14:51:54 crc kubenswrapper[4982]: I0224 14:51:54.125587 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrvvg" podStartSLOduration=141.125555169 podStartE2EDuration="2m21.125555169s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:54.12517354 +0000 UTC m=+175.744232113" watchObservedRunningTime="2026-02-24 14:51:54.125555169 +0000 UTC m=+175.744613722" Feb 24 14:51:54 crc kubenswrapper[4982]: I0224 14:51:54.144550 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:54 crc kubenswrapper[4982]: I0224 14:51:54.144634 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:54 crc kubenswrapper[4982]: I0224 14:51:54.144545 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:54 crc kubenswrapper[4982]: E0224 14:51:54.144693 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:54 crc kubenswrapper[4982]: I0224 14:51:54.144644 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:54 crc kubenswrapper[4982]: E0224 14:51:54.144813 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:54 crc kubenswrapper[4982]: E0224 14:51:54.144926 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:54 crc kubenswrapper[4982]: E0224 14:51:54.144995 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:54 crc kubenswrapper[4982]: E0224 14:51:54.297767 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 14:51:55 crc kubenswrapper[4982]: I0224 14:51:55.106317 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/1.log" Feb 24 14:51:55 crc kubenswrapper[4982]: I0224 14:51:55.107257 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/0.log" Feb 24 14:51:55 crc kubenswrapper[4982]: I0224 14:51:55.107311 4982 generic.go:334] "Generic (PLEG): container finished" podID="86687a8a-6996-44fa-a62e-b43266c31922" containerID="1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae" exitCode=1 Feb 24 14:51:55 crc kubenswrapper[4982]: I0224 14:51:55.107347 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgtdj" event={"ID":"86687a8a-6996-44fa-a62e-b43266c31922","Type":"ContainerDied","Data":"1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae"} Feb 24 14:51:55 crc kubenswrapper[4982]: I0224 14:51:55.107392 4982 scope.go:117] "RemoveContainer" containerID="daf68344eb9f1638497f946ad8f22dbf520ca98b34251cb9bd744b296c11c6fa" Feb 24 14:51:55 crc kubenswrapper[4982]: I0224 14:51:55.107958 4982 scope.go:117] "RemoveContainer" containerID="1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae" Feb 24 14:51:55 crc kubenswrapper[4982]: E0224 14:51:55.108181 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-jgtdj_openshift-multus(86687a8a-6996-44fa-a62e-b43266c31922)\"" pod="openshift-multus/multus-jgtdj" podUID="86687a8a-6996-44fa-a62e-b43266c31922" Feb 24 14:51:56 crc kubenswrapper[4982]: I0224 14:51:56.113149 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/1.log" Feb 24 14:51:56 crc kubenswrapper[4982]: I0224 14:51:56.144786 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:56 crc kubenswrapper[4982]: I0224 14:51:56.144821 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:56 crc kubenswrapper[4982]: I0224 14:51:56.144897 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:56 crc kubenswrapper[4982]: E0224 14:51:56.144951 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:56 crc kubenswrapper[4982]: I0224 14:51:56.144993 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:56 crc kubenswrapper[4982]: E0224 14:51:56.145170 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:56 crc kubenswrapper[4982]: E0224 14:51:56.145187 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:56 crc kubenswrapper[4982]: E0224 14:51:56.145258 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:58 crc kubenswrapper[4982]: I0224 14:51:58.145377 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:51:58 crc kubenswrapper[4982]: I0224 14:51:58.145392 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:51:58 crc kubenswrapper[4982]: I0224 14:51:58.145435 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:51:58 crc kubenswrapper[4982]: I0224 14:51:58.145549 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:58 crc kubenswrapper[4982]: E0224 14:51:58.146179 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:51:58 crc kubenswrapper[4982]: E0224 14:51:58.146568 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:51:58 crc kubenswrapper[4982]: I0224 14:51:58.146777 4982 scope.go:117] "RemoveContainer" containerID="73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3" Feb 24 14:51:58 crc kubenswrapper[4982]: E0224 14:51:58.146813 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:58 crc kubenswrapper[4982]: E0224 14:51:58.147087 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:51:59 crc kubenswrapper[4982]: I0224 14:51:59.127154 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6gwqq"] Feb 24 14:51:59 crc kubenswrapper[4982]: I0224 14:51:59.145295 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/3.log" Feb 24 14:51:59 crc kubenswrapper[4982]: I0224 14:51:59.151621 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:51:59 crc kubenswrapper[4982]: E0224 14:51:59.151796 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:51:59 crc kubenswrapper[4982]: I0224 14:51:59.155989 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerStarted","Data":"6ca186df7bd9edf5224331d00a096936d7109a834bb502205b6c3fdf09f5c8ec"} Feb 24 14:51:59 crc kubenswrapper[4982]: I0224 14:51:59.156762 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:51:59 crc kubenswrapper[4982]: I0224 14:51:59.198163 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podStartSLOduration=146.198132462 podStartE2EDuration="2m26.198132462s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:51:59.197792412 +0000 UTC m=+180.816850955" watchObservedRunningTime="2026-02-24 14:51:59.198132462 +0000 UTC m=+180.817191005" Feb 24 14:51:59 crc kubenswrapper[4982]: E0224 14:51:59.298546 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 14:52:00 crc kubenswrapper[4982]: I0224 14:52:00.144900 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:52:00 crc kubenswrapper[4982]: I0224 14:52:00.144904 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:52:00 crc kubenswrapper[4982]: E0224 14:52:00.145434 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:52:00 crc kubenswrapper[4982]: I0224 14:52:00.144932 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:52:00 crc kubenswrapper[4982]: E0224 14:52:00.145578 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:52:00 crc kubenswrapper[4982]: E0224 14:52:00.145662 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:52:01 crc kubenswrapper[4982]: I0224 14:52:01.145023 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:01 crc kubenswrapper[4982]: E0224 14:52:01.146378 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:52:02 crc kubenswrapper[4982]: I0224 14:52:02.145033 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:52:02 crc kubenswrapper[4982]: I0224 14:52:02.145138 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:52:02 crc kubenswrapper[4982]: I0224 14:52:02.145047 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:52:02 crc kubenswrapper[4982]: E0224 14:52:02.145209 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:52:02 crc kubenswrapper[4982]: E0224 14:52:02.145314 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:52:02 crc kubenswrapper[4982]: E0224 14:52:02.145415 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:52:03 crc kubenswrapper[4982]: I0224 14:52:03.145227 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:03 crc kubenswrapper[4982]: E0224 14:52:03.145359 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:52:04 crc kubenswrapper[4982]: I0224 14:52:04.145343 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:52:04 crc kubenswrapper[4982]: I0224 14:52:04.145436 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:52:04 crc kubenswrapper[4982]: I0224 14:52:04.145436 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:52:04 crc kubenswrapper[4982]: E0224 14:52:04.145718 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:52:04 crc kubenswrapper[4982]: E0224 14:52:04.145836 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:52:04 crc kubenswrapper[4982]: E0224 14:52:04.145967 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:52:04 crc kubenswrapper[4982]: E0224 14:52:04.301283 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 14:52:05 crc kubenswrapper[4982]: I0224 14:52:05.145653 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:05 crc kubenswrapper[4982]: E0224 14:52:05.146657 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:52:06 crc kubenswrapper[4982]: I0224 14:52:06.145053 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:52:06 crc kubenswrapper[4982]: I0224 14:52:06.145127 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:52:06 crc kubenswrapper[4982]: I0224 14:52:06.145127 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:52:06 crc kubenswrapper[4982]: E0224 14:52:06.146495 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:52:06 crc kubenswrapper[4982]: E0224 14:52:06.146701 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:52:06 crc kubenswrapper[4982]: E0224 14:52:06.146978 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:52:07 crc kubenswrapper[4982]: I0224 14:52:07.144918 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:07 crc kubenswrapper[4982]: E0224 14:52:07.146278 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:52:08 crc kubenswrapper[4982]: I0224 14:52:08.144697 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:52:08 crc kubenswrapper[4982]: E0224 14:52:08.144908 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:52:08 crc kubenswrapper[4982]: I0224 14:52:08.145241 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:52:08 crc kubenswrapper[4982]: E0224 14:52:08.145371 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:52:08 crc kubenswrapper[4982]: I0224 14:52:08.145686 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:52:08 crc kubenswrapper[4982]: E0224 14:52:08.145835 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:52:08 crc kubenswrapper[4982]: I0224 14:52:08.517418 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 14:52:09 crc kubenswrapper[4982]: I0224 14:52:09.144854 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:09 crc kubenswrapper[4982]: E0224 14:52:09.146673 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:52:09 crc kubenswrapper[4982]: E0224 14:52:09.302186 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 14:52:10 crc kubenswrapper[4982]: I0224 14:52:10.144697 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:52:10 crc kubenswrapper[4982]: I0224 14:52:10.144772 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:52:10 crc kubenswrapper[4982]: E0224 14:52:10.144865 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:52:10 crc kubenswrapper[4982]: E0224 14:52:10.145043 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:52:10 crc kubenswrapper[4982]: I0224 14:52:10.145379 4982 scope.go:117] "RemoveContainer" containerID="1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae" Feb 24 14:52:10 crc kubenswrapper[4982]: I0224 14:52:10.145597 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:52:10 crc kubenswrapper[4982]: E0224 14:52:10.145714 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:52:11 crc kubenswrapper[4982]: I0224 14:52:11.145015 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:11 crc kubenswrapper[4982]: E0224 14:52:11.145611 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:52:11 crc kubenswrapper[4982]: I0224 14:52:11.203107 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/1.log" Feb 24 14:52:11 crc kubenswrapper[4982]: I0224 14:52:11.203191 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgtdj" event={"ID":"86687a8a-6996-44fa-a62e-b43266c31922","Type":"ContainerStarted","Data":"6433821ead0065df5901d646c534d5091e0f54c83e36192db68997a515b90593"} Feb 24 14:52:12 crc kubenswrapper[4982]: I0224 14:52:12.144425 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:52:12 crc kubenswrapper[4982]: E0224 14:52:12.144685 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:52:12 crc kubenswrapper[4982]: I0224 14:52:12.144473 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:52:12 crc kubenswrapper[4982]: E0224 14:52:12.144928 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:52:12 crc kubenswrapper[4982]: I0224 14:52:12.144451 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:52:12 crc kubenswrapper[4982]: E0224 14:52:12.145146 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:52:13 crc kubenswrapper[4982]: I0224 14:52:13.145361 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:13 crc kubenswrapper[4982]: E0224 14:52:13.145564 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6gwqq" podUID="99337e5a-7ecb-4ed1-8ec5-14979be84e68" Feb 24 14:52:14 crc kubenswrapper[4982]: I0224 14:52:14.145550 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:52:14 crc kubenswrapper[4982]: I0224 14:52:14.145606 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:52:14 crc kubenswrapper[4982]: I0224 14:52:14.145568 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:52:14 crc kubenswrapper[4982]: E0224 14:52:14.145804 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 14:52:14 crc kubenswrapper[4982]: E0224 14:52:14.145936 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 14:52:14 crc kubenswrapper[4982]: E0224 14:52:14.146103 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 14:52:15 crc kubenswrapper[4982]: I0224 14:52:15.145023 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:15 crc kubenswrapper[4982]: I0224 14:52:15.149180 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 14:52:15 crc kubenswrapper[4982]: I0224 14:52:15.150835 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.130971 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131213 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:54:18.131174216 +0000 UTC m=+319.750232739 (durationBeforeRetry 2m2s). 
Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.131362 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.131424 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.131490 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.131588 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131700 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131747 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131747 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131772 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131775 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 24 14:52:16 crc
kubenswrapper[4982]: E0224 14:52:16.131763 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131876 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131896 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131844 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:54:18.131817544 +0000 UTC m=+319.750876077 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.131993 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 14:54:18.131962488 +0000 UTC m=+319.751021071 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.132031 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 14:54:18.132011469 +0000 UTC m=+319.751070022 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 14:52:16 crc kubenswrapper[4982]: E0224 14:52:16.132074 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 14:54:18.1320583 +0000 UTC m=+319.751116923 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.145072 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.145117 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.145069 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.147692 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.148218 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.148464 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.148717 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.233150 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.241491 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99337e5a-7ecb-4ed1-8ec5-14979be84e68-metrics-certs\") pod \"network-metrics-daemon-6gwqq\" (UID: \"99337e5a-7ecb-4ed1-8ec5-14979be84e68\") " pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.361572 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6gwqq" Feb 24 14:52:16 crc kubenswrapper[4982]: I0224 14:52:16.610619 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6gwqq"] Feb 24 14:52:16 crc kubenswrapper[4982]: W0224 14:52:16.622492 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99337e5a_7ecb_4ed1_8ec5_14979be84e68.slice/crio-4f81382ef53967dd46119f0f1bf7081f5a3841b2b8c7c9404fa1e343317c22ad WatchSource:0}: Error finding container 4f81382ef53967dd46119f0f1bf7081f5a3841b2b8c7c9404fa1e343317c22ad: Status 404 returned error can't find the container with id 4f81382ef53967dd46119f0f1bf7081f5a3841b2b8c7c9404fa1e343317c22ad Feb 24 14:52:17 crc kubenswrapper[4982]: I0224 14:52:17.227608 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" event={"ID":"99337e5a-7ecb-4ed1-8ec5-14979be84e68","Type":"ContainerStarted","Data":"0fc9568b61fd733821f6e54c7abe0a9562152bff74b943c17b0b16080f495359"} Feb 24 14:52:17 crc kubenswrapper[4982]: I0224 14:52:17.227691 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" event={"ID":"99337e5a-7ecb-4ed1-8ec5-14979be84e68","Type":"ContainerStarted","Data":"8dbd31eb694a8450c87367c3b69c2d9979ca69bcf7825e0cb935bdf719d29435"} Feb 24 14:52:17 crc kubenswrapper[4982]: I0224 14:52:17.227719 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6gwqq" event={"ID":"99337e5a-7ecb-4ed1-8ec5-14979be84e68","Type":"ContainerStarted","Data":"4f81382ef53967dd46119f0f1bf7081f5a3841b2b8c7c9404fa1e343317c22ad"} Feb 24 14:52:17 crc kubenswrapper[4982]: I0224 14:52:17.256111 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6gwqq" podStartSLOduration=164.256082916 podStartE2EDuration="2m44.256082916s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:17.256037385 +0000 UTC m=+198.875095928" watchObservedRunningTime="2026-02-24 14:52:17.256082916 +0000 UTC m=+198.875141459" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.651314 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.723261 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l5b9h"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.724162 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.727975 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.728777 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.735349 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.736278 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.737985 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.738982 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.739662 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.745408 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.747225 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.747472 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.748976 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5sdrm"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.749904 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.752851 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.753399 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.754007 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.754336 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.754929 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.755151 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.755322 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.755324 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.755678 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.755693 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.755852 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.768634 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.769338 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.792445 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.793591 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.793680 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.793815 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bpq5w"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.794048 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.794051 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.794583 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nq62b"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.794829 4982 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.794924 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.795202 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.795394 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-nsv6c"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.795581 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.795803 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.795580 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.798068 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f5kdd"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.798741 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.804700 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-htd7x"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.805523 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-htd7x" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.806292 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5kccw"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.806621 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.806675 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.806838 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.806903 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.808751 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9wbq"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.808857 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.809325 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vkq7q"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.809582 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.810033 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.810518 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.813140 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qdr8x"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.817809 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815320 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.818069 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.814590 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815353 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.818762 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815404 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815598 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815706 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.819316 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.818807 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.819860 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.819351 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815800 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815814 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815851 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815876 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815917 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.815957 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.820652 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.816001 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.816034 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.816066 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.816180 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.816181 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.816251 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.816470 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.816603 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.816717 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.817529 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.821360 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.821393 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.821431 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.817601 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.817639 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.821651 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.817676 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.817710 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.817719 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.817784 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.817942 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.819514 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.819733 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.820103 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.821723 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.823741 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.823917 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824041 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824421 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824606 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824653 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-client-ca\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824685 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9wc5\" (UniqueName: \"kubernetes.io/projected/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-kube-api-access-l9wc5\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824710 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-client-ca\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824743 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-serving-cert\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824764 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgfm8\" (UniqueName: \"kubernetes.io/projected/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-kube-api-access-kgfm8\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824799 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-config\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824819 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-config\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824843 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b93651-de65-42d6-96b0-560298df3222-serving-cert\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824878 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-images\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824900 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gh6\" (UniqueName: \"kubernetes.io/projected/37b93651-de65-42d6-96b0-560298df3222-kube-api-access-v5gh6\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824927 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-config\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.824950 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.829606 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.829839 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.830061 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.830247 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.830370 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.830481 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.832989 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.833313 4982 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.833368 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.833939 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.834074 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.834104 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.834338 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.834432 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.834543 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.834635 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.834717 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.834809 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.835099 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.853993 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tsz6d"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.855011 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.855231 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.856549 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.856668 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.857207 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.857430 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.858252 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.858700 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.861978 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.862339 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.863361 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.880169 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.883169 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.883747 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.884031 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.884214 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.884260 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.884417 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.889119 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.889202 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.889544 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.891286 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.894535 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.895022 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-clvbx"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.895132 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.895485 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9x558"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.895616 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.895878 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.896120 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.896378 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.896560 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.896788 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jhn22"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.897024 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.897037 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.897263 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.897269 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.897543 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.899561 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.899690 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.900317 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.900325 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.900857 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.900976 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.901382 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.901466 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.901703 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.901935 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.902872 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.903065 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.903683 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.922105 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.925907 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nq62b"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.926383 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-config\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.926415 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-config\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.926447 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9102e968-b893-4021-af5f-41b653664820-config\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.926467 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.926624 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.928123 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-config\") pod 
\"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.928265 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-config\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.935590 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zv2x7"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.936109 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b93651-de65-42d6-96b0-560298df3222-serving-cert\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.936445 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532412-hd7v7"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.937108 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532412-hd7v7" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.937373 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.938569 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c5jz8"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939638 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b93651-de65-42d6-96b0-560298df3222-serving-cert\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939736 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f53d5626-04b0-455d-a4bc-96207b51b221-encryption-config\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939762 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939777 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939739 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939803 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cj4v\" (UniqueName: \"kubernetes.io/projected/b36f6a63-d48a-4adb-bdb0-3b63c7679981-kube-api-access-4cj4v\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939849 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53d5626-04b0-455d-a4bc-96207b51b221-serving-cert\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939872 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f256a43e-ecd0-4cf0-8d2c-5e662a455533-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lccsh\" (UID: \"f256a43e-ecd0-4cf0-8d2c-5e662a455533\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939904 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f256a43e-ecd0-4cf0-8d2c-5e662a455533-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lccsh\" (UID: \"f256a43e-ecd0-4cf0-8d2c-5e662a455533\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939924 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w594x\" (UniqueName: \"kubernetes.io/projected/f256a43e-ecd0-4cf0-8d2c-5e662a455533-kube-api-access-w594x\") pod \"openshift-controller-manager-operator-756b6f6bc6-lccsh\" (UID: \"f256a43e-ecd0-4cf0-8d2c-5e662a455533\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939944 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-policies\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.939990 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-images\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940142 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-htd7x"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940146 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5gh6\" (UniqueName: \"kubernetes.io/projected/37b93651-de65-42d6-96b0-560298df3222-kube-api-access-v5gh6\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940286 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f53d5626-04b0-455d-a4bc-96207b51b221-audit-policies\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940618 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f53d5626-04b0-455d-a4bc-96207b51b221-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940671 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940730 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-config\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940756 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-dir\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940792 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 
14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940848 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941382 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941420 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6-serving-cert\") pod \"openshift-config-operator-7777fb866f-5kccw\" (UID: \"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941452 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f53d5626-04b0-455d-a4bc-96207b51b221-etcd-client\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941473 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f53d5626-04b0-455d-a4bc-96207b51b221-audit-dir\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941534 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9xxp\" (UniqueName: \"kubernetes.io/projected/9102e968-b893-4021-af5f-41b653664820-kube-api-access-d9xxp\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941558 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 
crc kubenswrapper[4982]: I0224 14:52:23.941581 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-config\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941592 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941674 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5kccw\" (UID: \"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941703 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdxst\" (UniqueName: \"kubernetes.io/projected/f53d5626-04b0-455d-a4bc-96207b51b221-kube-api-access-pdxst\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941723 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941748 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941771 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-client-ca\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941811 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 
14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941853 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9wc5\" (UniqueName: \"kubernetes.io/projected/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-kube-api-access-l9wc5\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941875 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f53d5626-04b0-455d-a4bc-96207b51b221-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941895 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-64kvf\" (UID: \"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941934 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-client-ca\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941963 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-serving-cert\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.941988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9102e968-b893-4021-af5f-41b653664820-auth-proxy-config\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.942012 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9102e968-b893-4021-af5f-41b653664820-machine-approver-tls\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.942039 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3-srv-cert\") pod \"olm-operator-6b444d44fb-64kvf\" (UID: \"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.942063 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhjv\" (UniqueName: \"kubernetes.io/projected/91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3-kube-api-access-nvhjv\") pod \"olm-operator-6b444d44fb-64kvf\" (UID: \"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.942090 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.942116 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgfm8\" (UniqueName: \"kubernetes.io/projected/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-kube-api-access-kgfm8\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.942142 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275r7\" (UniqueName: \"kubernetes.io/projected/7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6-kube-api-access-275r7\") pod \"openshift-config-operator-7777fb866f-5kccw\" (UID: \"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.942151 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.942646 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-client-ca\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.943656 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bpq5w"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.940771 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-images\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.946254 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5sdrm"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.950362 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nsv6c"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.952835 4982 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ndstv"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.954274 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-client-ca\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.955095 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-serving-cert\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.956912 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.957652 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.958913 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.963533 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.964093 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.965282 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.966772 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.968029 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qdr8x"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.970267 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.970320 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9wbq"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.973737 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f5kdd"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.975282 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.976720 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jhn22"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.978513 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.979304 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5kccw"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.979950 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.981417 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.983790 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vkq7q"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.986433 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.986767 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.987165 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tsz6d"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.988410 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zv2x7"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.992103 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c5jz8"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.993553 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nktkq"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.994387 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.994923 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.997002 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.998375 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb"] Feb 24 14:52:23 crc kubenswrapper[4982]: I0224 14:52:23.999686 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l5b9h"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.000978 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.002209 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.003955 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.005176 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532412-hd7v7"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.006329 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.006904 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.008025 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.009159 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-clvbx"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.010369 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.011549 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.012770 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ndstv"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.013799 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vfkh9"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.015044 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ffjpg"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.015172 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vfkh9" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.016144 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vfkh9"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.016218 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.017241 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ffjpg"] Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.027081 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.042771 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f53d5626-04b0-455d-a4bc-96207b51b221-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.042826 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-64kvf\" (UID: \"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.042881 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9102e968-b893-4021-af5f-41b653664820-auth-proxy-config\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.042909 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9102e968-b893-4021-af5f-41b653664820-machine-approver-tls\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.043079 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvhjv\" (UniqueName: \"kubernetes.io/projected/91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3-kube-api-access-nvhjv\") pod \"olm-operator-6b444d44fb-64kvf\" (UID: \"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.043102 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3-srv-cert\") pod \"olm-operator-6b444d44fb-64kvf\" (UID: \"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.043128 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.043166 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275r7\" (UniqueName: \"kubernetes.io/projected/7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6-kube-api-access-275r7\") pod \"openshift-config-operator-7777fb866f-5kccw\" (UID: \"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.043750 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9102e968-b893-4021-af5f-41b653664820-auth-proxy-config\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.043979 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044145 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9102e968-b893-4021-af5f-41b653664820-config\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044211 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044335 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f53d5626-04b0-455d-a4bc-96207b51b221-encryption-config\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044374 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044399 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cj4v\" (UniqueName: \"kubernetes.io/projected/b36f6a63-d48a-4adb-bdb0-3b63c7679981-kube-api-access-4cj4v\") pod 
\"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044430 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53d5626-04b0-455d-a4bc-96207b51b221-serving-cert\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044453 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f256a43e-ecd0-4cf0-8d2c-5e662a455533-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lccsh\" (UID: \"f256a43e-ecd0-4cf0-8d2c-5e662a455533\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044477 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f256a43e-ecd0-4cf0-8d2c-5e662a455533-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lccsh\" (UID: \"f256a43e-ecd0-4cf0-8d2c-5e662a455533\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w594x\" (UniqueName: \"kubernetes.io/projected/f256a43e-ecd0-4cf0-8d2c-5e662a455533-kube-api-access-w594x\") pod \"openshift-controller-manager-operator-756b6f6bc6-lccsh\" (UID: \"f256a43e-ecd0-4cf0-8d2c-5e662a455533\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044546 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-policies\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044575 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f53d5626-04b0-455d-a4bc-96207b51b221-audit-policies\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044597 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f53d5626-04b0-455d-a4bc-96207b51b221-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044618 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044640 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-dir\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044657 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9102e968-b893-4021-af5f-41b653664820-config\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044662 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044798 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044834 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6-serving-cert\") pod \"openshift-config-operator-7777fb866f-5kccw\" (UID: \"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044883 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f53d5626-04b0-455d-a4bc-96207b51b221-etcd-client\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044909 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f53d5626-04b0-455d-a4bc-96207b51b221-audit-dir\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.044959 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045001 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-d9xxp\" (UniqueName: \"kubernetes.io/projected/9102e968-b893-4021-af5f-41b653664820-kube-api-access-d9xxp\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045045 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045065 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045146 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f53d5626-04b0-455d-a4bc-96207b51b221-audit-dir\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045151 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5kccw\" (UID: \"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045216 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdxst\" (UniqueName: \"kubernetes.io/projected/f53d5626-04b0-455d-a4bc-96207b51b221-kube-api-access-pdxst\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045248 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045297 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045555 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.045732 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5kccw\" (UID: \"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.047463 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-policies\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.047930 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9102e968-b893-4021-af5f-41b653664820-machine-approver-tls\") pod \"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.048136 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f256a43e-ecd0-4cf0-8d2c-5e662a455533-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lccsh\" (UID: \"f256a43e-ecd0-4cf0-8d2c-5e662a455533\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.048362 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.048047 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-dir\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.049671 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.049687 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f53d5626-04b0-455d-a4bc-96207b51b221-encryption-config\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.050281 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.050592 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.050940 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.051447 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.051909 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6-serving-cert\") pod \"openshift-config-operator-7777fb866f-5kccw\" (UID: \"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.052091 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.052440 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.052539 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.052547 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53d5626-04b0-455d-a4bc-96207b51b221-serving-cert\") pod 
\"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.052695 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f256a43e-ecd0-4cf0-8d2c-5e662a455533-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lccsh\" (UID: \"f256a43e-ecd0-4cf0-8d2c-5e662a455533\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.052991 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.058203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f53d5626-04b0-455d-a4bc-96207b51b221-etcd-client\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.067716 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.086730 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.088601 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f53d5626-04b0-455d-a4bc-96207b51b221-audit-policies\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.106409 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.113644 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f53d5626-04b0-455d-a4bc-96207b51b221-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.126178 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.128408 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f53d5626-04b0-455d-a4bc-96207b51b221-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.147114 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 
14:52:24.173686 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.187362 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.207215 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.217723 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3-srv-cert\") pod \"olm-operator-6b444d44fb-64kvf\" (UID: \"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.228538 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.247435 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.266826 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.277716 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-64kvf\" (UID: \"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.307820 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.327297 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.347128 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.366540 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.387069 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.407317 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.427321 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.447173 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.466916 
4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.486895 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.508871 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.527123 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.546356 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.566122 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.587703 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.606331 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.626930 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.646001 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.667032 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.687812 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.707950 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.727003 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.748455 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.787171 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.806986 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.827192 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 
14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.847162 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.868118 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.887359 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.905122 4982 request.go:700] Waited for 1.008213928s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0 Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.907576 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.927861 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.947168 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.967676 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 14:52:24 crc kubenswrapper[4982]: I0224 14:52:24.987415 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.007312 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.027587 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.047867 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.069448 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.087546 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.107449 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.127339 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.147291 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.167740 4982 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.188734 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.209196 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.226775 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.246482 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.269767 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.287313 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.307278 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.326794 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.348019 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.367851 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.387834 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.407609 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.428153 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.447433 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.481309 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.486813 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.508865 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.527244 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.548648 4982 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.568205 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.589074 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.633548 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5gh6\" (UniqueName: \"kubernetes.io/projected/37b93651-de65-42d6-96b0-560298df3222-kube-api-access-v5gh6\") pod \"route-controller-manager-6576b87f9c-rpsqz\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.649284 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9wc5\" (UniqueName: \"kubernetes.io/projected/1b0d00bf-0cb6-4fa2-9561-edafa4a10082-kube-api-access-l9wc5\") pod \"machine-api-operator-5694c8668f-5sdrm\" (UID: \"1b0d00bf-0cb6-4fa2-9561-edafa4a10082\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.667234 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.675011 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgfm8\" (UniqueName: \"kubernetes.io/projected/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-kube-api-access-kgfm8\") pod \"controller-manager-879f6c89f-l5b9h\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.686949 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.706880 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.726769 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.747807 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.767620 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.787399 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.807641 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.829105 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.847664 4982 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.847835 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.867349 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.887781 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.906888 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.915928 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.916565 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.925604 4982 request.go:700] Waited for 1.8821638s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.957420 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvhjv\" (UniqueName: \"kubernetes.io/projected/91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3-kube-api-access-nvhjv\") pod \"olm-operator-6b444d44fb-64kvf\" (UID: \"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:25 crc kubenswrapper[4982]: I0224 14:52:25.982531 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275r7\" (UniqueName: \"kubernetes.io/projected/7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6-kube-api-access-275r7\") pod \"openshift-config-operator-7777fb866f-5kccw\" (UID: \"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.000022 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdxst\" (UniqueName: \"kubernetes.io/projected/f53d5626-04b0-455d-a4bc-96207b51b221-kube-api-access-pdxst\") pod \"apiserver-7bbb656c7d-6hrmx\" (UID: \"f53d5626-04b0-455d-a4bc-96207b51b221\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.020649 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cj4v\" (UniqueName: \"kubernetes.io/projected/b36f6a63-d48a-4adb-bdb0-3b63c7679981-kube-api-access-4cj4v\") pod \"oauth-openshift-558db77b4-f5kdd\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") " pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.041132 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9xxp\" (UniqueName: \"kubernetes.io/projected/9102e968-b893-4021-af5f-41b653664820-kube-api-access-d9xxp\") pod 
\"machine-approver-56656f9798-rj9bm\" (UID: \"9102e968-b893-4021-af5f-41b653664820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.050790 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w594x\" (UniqueName: \"kubernetes.io/projected/f256a43e-ecd0-4cf0-8d2c-5e662a455533-kube-api-access-w594x\") pod \"openshift-controller-manager-operator-756b6f6bc6-lccsh\" (UID: \"f256a43e-ecd0-4cf0-8d2c-5e662a455533\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072600 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4e9c817-5d2a-4d09-aaed-54b8a3735c25-proxy-tls\") pod \"machine-config-controller-84d6567774-md6w5\" (UID: \"f4e9c817-5d2a-4d09-aaed-54b8a3735c25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072684 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072725 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-node-pullsecrets\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072753 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/194db7e1-3e0c-42ca-95db-97ceb1c43433-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bh6tr\" (UID: \"194db7e1-3e0c-42ca-95db-97ceb1c43433\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072774 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-image-import-ca\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072795 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-audit-dir\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072860 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-trusted-ca-bundle\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072887 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259e2d88-6dda-4a11-b71b-da8eb015e022-service-ca-bundle\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072912 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217b0f77-c367-4c25-9965-051d89a335e1-config\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072945 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49787c69-3eda-42e4-92d7-9770d605b4e7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072970 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259e2d88-6dda-4a11-b71b-da8eb015e022-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.072993 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259e2d88-6dda-4a11-b71b-da8eb015e022-config\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073017 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073039 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s92k8\" (UniqueName: \"kubernetes.io/projected/49787c69-3eda-42e4-92d7-9770d605b4e7-kube-api-access-s92k8\") pod 
\"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073063 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-audit\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073096 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lt5dx\" (UID: \"1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073124 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-etcd-client\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073152 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-bound-sa-token\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073184 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf5nh\" (UniqueName: \"kubernetes.io/projected/e0fc969b-5b42-4649-9109-d049431cae47-kube-api-access-qf5nh\") pod \"downloads-7954f5f757-htd7x\" (UID: \"e0fc969b-5b42-4649-9109-d049431cae47\") " pod="openshift-console/downloads-7954f5f757-htd7x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073214 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-config\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073239 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-serving-cert\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073268 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194db7e1-3e0c-42ca-95db-97ceb1c43433-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bh6tr\" (UID: \"194db7e1-3e0c-42ca-95db-97ceb1c43433\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc 
kubenswrapper[4982]: I0224 14:52:26.073298 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49787c69-3eda-42e4-92d7-9770d605b4e7-metrics-tls\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073357 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-etcd-serving-ca\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073384 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx8gf\" (UniqueName: \"kubernetes.io/projected/194db7e1-3e0c-42ca-95db-97ceb1c43433-kube-api-access-zx8gf\") pod \"openshift-apiserver-operator-796bbdcf4f-bh6tr\" (UID: \"194db7e1-3e0c-42ca-95db-97ceb1c43433\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073409 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-tls\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073453 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dx44\" (UniqueName: \"kubernetes.io/projected/f4e9c817-5d2a-4d09-aaed-54b8a3735c25-kube-api-access-7dx44\") pod \"machine-config-controller-84d6567774-md6w5\" (UID: \"f4e9c817-5d2a-4d09-aaed-54b8a3735c25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.073479 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f31bded6-f3d5-42b4-b479-3c01ce30e73a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074364 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074419 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49787c69-3eda-42e4-92d7-9770d605b4e7-trusted-ca\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 
14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074448 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074473 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6x5f\" (UniqueName: \"kubernetes.io/projected/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-kube-api-access-g6x5f\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074520 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-serving-cert\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.074552 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:26.574528318 +0000 UTC m=+208.193586821 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074611 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-trusted-ca\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074647 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f31bded6-f3d5-42b4-b479-3c01ce30e73a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074671 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh5gd\" (UniqueName: \"kubernetes.io/projected/1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd-kube-api-access-vh5gd\") pod \"cluster-samples-operator-665b6dd947-lt5dx\" (UID: \"1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074692 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-oauth-serving-cert\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074716 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259e2d88-6dda-4a11-b71b-da8eb015e022-serving-cert\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074782 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrpx\" (UniqueName: \"kubernetes.io/projected/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-kube-api-access-pqrpx\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074817 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-config\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074841 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-oauth-config\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074865 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2kxp\" (UniqueName: \"kubernetes.io/projected/259e2d88-6dda-4a11-b71b-da8eb015e022-kube-api-access-x2kxp\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074887 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41974777-d629-460d-b0b1-bdd82afb34d4-metrics-tls\") pod \"dns-operator-744455d44c-vkq7q\" (UID: \"41974777-d629-460d-b0b1-bdd82afb34d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074911 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217b0f77-c367-4c25-9965-051d89a335e1-serving-cert\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074934 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k99ss\" 
(UniqueName: \"kubernetes.io/projected/217b0f77-c367-4c25-9965-051d89a335e1-kube-api-access-k99ss\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074968 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-service-ca\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.074995 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxjgp\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-kube-api-access-cxjgp\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.075022 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6mv\" (UniqueName: \"kubernetes.io/projected/41974777-d629-460d-b0b1-bdd82afb34d4-kube-api-access-mh6mv\") pod \"dns-operator-744455d44c-vkq7q\" (UID: \"41974777-d629-460d-b0b1-bdd82afb34d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.075047 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdbbx\" (UniqueName: \"kubernetes.io/projected/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-kube-api-access-vdbbx\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.075103 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-certificates\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.075139 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-encryption-config\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.076138 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4e9c817-5d2a-4d09-aaed-54b8a3735c25-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-md6w5\" (UID: \"f4e9c817-5d2a-4d09-aaed-54b8a3735c25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.076222 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/217b0f77-c367-4c25-9965-051d89a335e1-trusted-ca\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.089943 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.102355 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.138676 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l5b9h"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.144568 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" Feb 24 14:52:26 crc kubenswrapper[4982]: W0224 14:52:26.147901 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9d107c2_6c77_48a2_b2f6_328fb0d83afc.slice/crio-2aa4ce3b74868284a8fd1f9da5f7df01cc1852314306dfa8b35c3c1e36f73896 WatchSource:0}: Error finding container 2aa4ce3b74868284a8fd1f9da5f7df01cc1852314306dfa8b35c3c1e36f73896: Status 404 returned error can't find the container with id 2aa4ce3b74868284a8fd1f9da5f7df01cc1852314306dfa8b35c3c1e36f73896 Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177072 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.177284 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:26.677249473 +0000 UTC m=+208.296307986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177362 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177399 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08265db9-8c1f-47c9-b812-324889b64e93-config\") pod \"kube-apiserver-operator-766d6c64bb-xhxbw\" (UID: \"08265db9-8c1f-47c9-b812-324889b64e93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177418 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08265db9-8c1f-47c9-b812-324889b64e93-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xhxbw\" (UID: \"08265db9-8c1f-47c9-b812-324889b64e93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177439 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22ced552-66a9-4936-8c25-3e3e8734de79-webhook-cert\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177462 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4367d9de-2fce-45c8-b354-fad4122e7eef-serving-cert\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177680 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-node-pullsecrets\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177753 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2b79e66-39f3-40e8-ad1d-cdd963a10983-metrics-certs\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177780 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-node-pullsecrets\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177783 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-image-import-ca\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177848 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-audit-dir\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177897 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/194db7e1-3e0c-42ca-95db-97ceb1c43433-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bh6tr\" (UID: \"194db7e1-3e0c-42ca-95db-97ceb1c43433\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177922 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08265db9-8c1f-47c9-b812-324889b64e93-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xhxbw\" (UID: \"08265db9-8c1f-47c9-b812-324889b64e93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177928 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-audit-dir\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.177948 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e490bbd4-3eec-4547-9cab-b43ea88e0377-proxy-tls\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178008 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e490bbd4-3eec-4547-9cab-b43ea88e0377-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178056 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259e2d88-6dda-4a11-b71b-da8eb015e022-service-ca-bundle\") pod \"authentication-operator-69f744f599-qdr8x\" 
(UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178075 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217b0f77-c367-4c25-9965-051d89a335e1-config\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178426 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkg6v\" (UniqueName: \"kubernetes.io/projected/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-kube-api-access-kkg6v\") pod \"collect-profiles-29532405-xmtjp\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178458 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8dd5c785-b167-4b52-8c16-4eea0fcb5685-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6dkb\" (UID: \"8dd5c785-b167-4b52-8c16-4eea0fcb5685\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178519 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93fe75c8-ce81-4489-a557-db6b117d6079-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pw68d\" (UID: \"93fe75c8-ce81-4489-a557-db6b117d6079\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178541 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zv2x7\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178567 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2b79e66-39f3-40e8-ad1d-cdd963a10983-service-ca-bundle\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178587 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s92k8\" (UniqueName: \"kubernetes.io/projected/49787c69-3eda-42e4-92d7-9770d605b4e7-kube-api-access-s92k8\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178607 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4367d9de-2fce-45c8-b354-fad4122e7eef-etcd-ca\") pod 
\"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178632 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178659 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lt5dx\" (UID: \"1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178677 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56d7389c-7094-46ef-ab67-46931e31a6a4-signing-key\") pod \"service-ca-9c57cc56f-c5jz8\" (UID: \"56d7389c-7094-46ef-ab67-46931e31a6a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178695 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-serving-cert\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178730 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-bound-sa-token\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178748 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf5nh\" (UniqueName: \"kubernetes.io/projected/e0fc969b-5b42-4649-9109-d049431cae47-kube-api-access-qf5nh\") pod \"downloads-7954f5f757-htd7x\" (UID: \"e0fc969b-5b42-4649-9109-d049431cae47\") " pod="openshift-console/downloads-7954f5f757-htd7x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178769 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-config\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178793 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-etcd-client\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178820 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/194db7e1-3e0c-42ca-95db-97ceb1c43433-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bh6tr\" (UID: \"194db7e1-3e0c-42ca-95db-97ceb1c43433\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178890 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx8gf\" (UniqueName: \"kubernetes.io/projected/194db7e1-3e0c-42ca-95db-97ceb1c43433-kube-api-access-zx8gf\") pod \"openshift-apiserver-operator-796bbdcf4f-bh6tr\" (UID: \"194db7e1-3e0c-42ca-95db-97ceb1c43433\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178912 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htd8g\" (UniqueName: \"kubernetes.io/projected/21716abb-b37d-43a7-bf98-2af519ada148-kube-api-access-htd8g\") pod \"kube-storage-version-migrator-operator-b67b599dd-7l74l\" (UID: \"21716abb-b37d-43a7-bf98-2af519ada148\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.178954 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4588b05d-3afb-4e0f-881a-426d13afff5c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ndrwf\" (UID: \"4588b05d-3afb-4e0f-881a-426d13afff5c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179000 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f31bded6-f3d5-42b4-b479-3c01ce30e73a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179022 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4588b05d-3afb-4e0f-881a-426d13afff5c-config\") pod \"kube-controller-manager-operator-78b949d7b-ndrwf\" (UID: \"4588b05d-3afb-4e0f-881a-426d13afff5c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179043 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49787c69-3eda-42e4-92d7-9770d605b4e7-trusted-ca\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179062 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e89e8fcd-f436-485c-bc82-18e06f222400-cert\") pod \"ingress-canary-vfkh9\" (UID: \"e89e8fcd-f436-485c-bc82-18e06f222400\") " pod="openshift-ingress-canary/ingress-canary-vfkh9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179081 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-8tdrg\" (UniqueName: \"kubernetes.io/projected/d4d35181-5fae-443f-acb5-bcfa63181ce7-kube-api-access-8tdrg\") pod \"multus-admission-controller-857f4d67dd-clvbx\" (UID: \"d4d35181-5fae-443f-acb5-bcfa63181ce7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179100 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f31bded6-f3d5-42b4-b479-3c01ce30e73a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179113 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259e2d88-6dda-4a11-b71b-da8eb015e022-service-ca-bundle\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179118 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh5gd\" (UniqueName: \"kubernetes.io/projected/1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd-kube-api-access-vh5gd\") pod \"cluster-samples-operator-665b6dd947-lt5dx\" (UID: \"1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179170 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-oauth-serving-cert\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179190 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5b5\" (UniqueName: \"kubernetes.io/projected/4367d9de-2fce-45c8-b354-fad4122e7eef-kube-api-access-fq5b5\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179209 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/22ced552-66a9-4936-8c25-3e3e8734de79-tmpfs\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179228 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df-certs\") pod \"machine-config-server-nktkq\" (UID: \"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df\") " pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179249 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259e2d88-6dda-4a11-b71b-da8eb015e022-serving-cert\") pod 
\"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179270 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tn6x\" (UniqueName: \"kubernetes.io/projected/3fcaab52-6a23-416e-a584-8aa43c11ecef-kube-api-access-7tn6x\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179288 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4367d9de-2fce-45c8-b354-fad4122e7eef-etcd-client\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179335 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm57w\" (UniqueName: \"kubernetes.io/projected/e89e8fcd-f436-485c-bc82-18e06f222400-kube-api-access-mm57w\") pod \"ingress-canary-vfkh9\" (UID: \"e89e8fcd-f436-485c-bc82-18e06f222400\") " pod="openshift-ingress-canary/ingress-canary-vfkh9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179358 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41974777-d629-460d-b0b1-bdd82afb34d4-metrics-tls\") pod \"dns-operator-744455d44c-vkq7q\" (UID: \"41974777-d629-460d-b0b1-bdd82afb34d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179399 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-oauth-config\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179420 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2kxp\" (UniqueName: \"kubernetes.io/projected/259e2d88-6dda-4a11-b71b-da8eb015e022-kube-api-access-x2kxp\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179445 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtc25\" (UniqueName: \"kubernetes.io/projected/15a071cd-05cc-4acb-a093-93e526224c69-kube-api-access-vtc25\") pod \"migrator-59844c95c7-c2lr6\" (UID: \"15a071cd-05cc-4acb-a093-93e526224c69\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179476 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-service-ca\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 
14:52:26.179515 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e2b79e66-39f3-40e8-ad1d-cdd963a10983-default-certificate\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179537 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxjgp\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-kube-api-access-cxjgp\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179559 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh6mv\" (UniqueName: \"kubernetes.io/projected/41974777-d629-460d-b0b1-bdd82afb34d4-kube-api-access-mh6mv\") pod \"dns-operator-744455d44c-vkq7q\" (UID: \"41974777-d629-460d-b0b1-bdd82afb34d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179580 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-plugins-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179599 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f7b6b2b-459d-44d3-96ab-b798b81342dc-profile-collector-cert\") pod \"catalog-operator-68c6474976-tlx7h\" (UID: \"7f7b6b2b-459d-44d3-96ab-b798b81342dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179619 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4367d9de-2fce-45c8-b354-fad4122e7eef-config\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179644 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdbbx\" (UniqueName: \"kubernetes.io/projected/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-kube-api-access-vdbbx\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179667 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfx5p\" (UniqueName: \"kubernetes.io/projected/26ef06ba-5fad-49aa-a281-5e674a6f6f39-kube-api-access-jfx5p\") pod \"package-server-manager-789f6589d5-czvmz\" (UID: \"26ef06ba-5fad-49aa-a281-5e674a6f6f39\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179685 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/d4d35181-5fae-443f-acb5-bcfa63181ce7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-clvbx\" (UID: \"d4d35181-5fae-443f-acb5-bcfa63181ce7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179721 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-encryption-config\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179742 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-registration-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179756 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217b0f77-c367-4c25-9965-051d89a335e1-config\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179767 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4e9c817-5d2a-4d09-aaed-54b8a3735c25-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-md6w5\" (UID: \"f4e9c817-5d2a-4d09-aaed-54b8a3735c25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179824 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/217b0f77-c367-4c25-9965-051d89a335e1-trusted-ca\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179852 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4588b05d-3afb-4e0f-881a-426d13afff5c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ndrwf\" (UID: \"4588b05d-3afb-4e0f-881a-426d13afff5c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179897 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93fe75c8-ce81-4489-a557-db6b117d6079-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pw68d\" (UID: \"93fe75c8-ce81-4489-a557-db6b117d6079\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179920 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-config-volume\") pod 
\"collect-profiles-29532405-xmtjp\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179965 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzhsv\" (UniqueName: \"kubernetes.io/projected/e490bbd4-3eec-4547-9cab-b43ea88e0377-kube-api-access-dzhsv\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.179989 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21716abb-b37d-43a7-bf98-2af519ada148-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7l74l\" (UID: \"21716abb-b37d-43a7-bf98-2af519ada148\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180068 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df-node-bootstrap-token\") pod \"machine-config-server-nktkq\" (UID: \"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df\") " pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180093 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlb5c\" (UniqueName: \"kubernetes.io/projected/56d7389c-7094-46ef-ab67-46931e31a6a4-kube-api-access-xlb5c\") pod \"service-ca-9c57cc56f-c5jz8\" (UID: \"56d7389c-7094-46ef-ab67-46931e31a6a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180152 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180178 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-trusted-ca-bundle\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180202 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-socket-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180227 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49787c69-3eda-42e4-92d7-9770d605b4e7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: 
\"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180257 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259e2d88-6dda-4a11-b71b-da8eb015e022-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180287 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47nt\" (UniqueName: \"kubernetes.io/projected/8dd5c785-b167-4b52-8c16-4eea0fcb5685-kube-api-access-z47nt\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6dkb\" (UID: \"8dd5c785-b167-4b52-8c16-4eea0fcb5685\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180315 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259e2d88-6dda-4a11-b71b-da8eb015e022-config\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180339 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67a3e1e3-70ef-4fe0-b72c-f0156143ad4f-config-volume\") pod \"dns-default-ndstv\" (UID: \"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f\") " pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180363 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ef06ba-5fad-49aa-a281-5e674a6f6f39-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-czvmz\" (UID: \"26ef06ba-5fad-49aa-a281-5e674a6f6f39\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180393 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-audit\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180436 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49787c69-3eda-42e4-92d7-9770d605b4e7-metrics-tls\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180461 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgjpc\" (UniqueName: \"kubernetes.io/projected/e2b79e66-39f3-40e8-ad1d-cdd963a10983-kube-api-access-rgjpc\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " 
pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180492 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-etcd-serving-ca\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180541 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dx44\" (UniqueName: \"kubernetes.io/projected/f4e9c817-5d2a-4d09-aaed-54b8a3735c25-kube-api-access-7dx44\") pod \"machine-config-controller-84d6567774-md6w5\" (UID: \"f4e9c817-5d2a-4d09-aaed-54b8a3735c25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180565 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zv2x7\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180607 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4e9c817-5d2a-4d09-aaed-54b8a3735c25-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-md6w5\" (UID: \"f4e9c817-5d2a-4d09-aaed-54b8a3735c25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180618 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-tls\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180672 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180698 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180740 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6x5f\" (UniqueName: \"kubernetes.io/projected/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-kube-api-access-g6x5f\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc 
kubenswrapper[4982]: I0224 14:52:26.180766 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-serving-cert\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180791 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93fe75c8-ce81-4489-a557-db6b117d6079-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pw68d\" (UID: \"93fe75c8-ce81-4489-a557-db6b117d6079\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180816 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4367d9de-2fce-45c8-b354-fad4122e7eef-etcd-service-ca\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180855 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-trusted-ca\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180882 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmdm4\" (UniqueName: \"kubernetes.io/projected/41b9348a-b44f-4ecf-9043-0948b992d64e-kube-api-access-zmdm4\") pod \"auto-csr-approver-29532412-hd7v7\" (UID: \"41b9348a-b44f-4ecf-9043-0948b992d64e\") " pod="openshift-infra/auto-csr-approver-29532412-hd7v7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180908 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56d7389c-7094-46ef-ab67-46931e31a6a4-signing-cabundle\") pod \"service-ca-9c57cc56f-c5jz8\" (UID: \"56d7389c-7094-46ef-ab67-46931e31a6a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180932 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22ced552-66a9-4936-8c25-3e3e8734de79-apiservice-cert\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.181538 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.182012 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/194db7e1-3e0c-42ca-95db-97ceb1c43433-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bh6tr\" (UID: \"194db7e1-3e0c-42ca-95db-97ceb1c43433\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.182225 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/194db7e1-3e0c-42ca-95db-97ceb1c43433-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bh6tr\" (UID: \"194db7e1-3e0c-42ca-95db-97ceb1c43433\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.182485 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-image-import-ca\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.182666 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f31bded6-f3d5-42b4-b479-3c01ce30e73a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.180258 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-config\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.181800 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqrpx\" (UniqueName: \"kubernetes.io/projected/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-kube-api-access-pqrpx\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183291 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8mj\" (UniqueName: \"kubernetes.io/projected/7f7b6b2b-459d-44d3-96ab-b798b81342dc-kube-api-access-cc8mj\") pod \"catalog-operator-68c6474976-tlx7h\" (UID: \"7f7b6b2b-459d-44d3-96ab-b798b81342dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183322 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0409dd-6f2c-489e-820f-52019dbb3e0c-serving-cert\") pod \"service-ca-operator-777779d784-jhn22\" (UID: \"1f0409dd-6f2c-489e-820f-52019dbb3e0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183358 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-config\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " 
pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183418 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-audit\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183448 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217b0f77-c367-4c25-9965-051d89a335e1-serving-cert\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183477 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k99ss\" (UniqueName: \"kubernetes.io/projected/217b0f77-c367-4c25-9965-051d89a335e1-kube-api-access-k99ss\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183524 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96sq9\" (UniqueName: \"kubernetes.io/projected/f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df-kube-api-access-96sq9\") pod \"machine-config-server-nktkq\" (UID: \"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df\") " pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183554 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnxs8\" (UniqueName: \"kubernetes.io/projected/22ced552-66a9-4936-8c25-3e3e8734de79-kube-api-access-jnxs8\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183580 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8fbb\" (UniqueName: \"kubernetes.io/projected/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-kube-api-access-q8fbb\") pod \"marketplace-operator-79b997595-zv2x7\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183605 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k22h\" (UniqueName: \"kubernetes.io/projected/67a3e1e3-70ef-4fe0-b72c-f0156143ad4f-kube-api-access-2k22h\") pod \"dns-default-ndstv\" (UID: \"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f\") " pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183626 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0409dd-6f2c-489e-820f-52019dbb3e0c-config\") pod \"service-ca-operator-777779d784-jhn22\" (UID: \"1f0409dd-6f2c-489e-820f-52019dbb3e0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183679 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67a3e1e3-70ef-4fe0-b72c-f0156143ad4f-metrics-tls\") pod \"dns-default-ndstv\" (UID: \"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f\") " pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183706 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-certificates\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183733 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-mountpoint-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183760 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-csi-data-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183782 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f7b6b2b-459d-44d3-96ab-b798b81342dc-srv-cert\") pod \"catalog-operator-68c6474976-tlx7h\" (UID: \"7f7b6b2b-459d-44d3-96ab-b798b81342dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183807 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-secret-volume\") pod \"collect-profiles-29532405-xmtjp\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183834 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4e9c817-5d2a-4d09-aaed-54b8a3735c25-proxy-tls\") pod \"machine-config-controller-84d6567774-md6w5\" (UID: \"f4e9c817-5d2a-4d09-aaed-54b8a3735c25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183860 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlkzh\" (UniqueName: \"kubernetes.io/projected/1f0409dd-6f2c-489e-820f-52019dbb3e0c-kube-api-access-zlkzh\") pod \"service-ca-operator-777779d784-jhn22\" (UID: \"1f0409dd-6f2c-489e-820f-52019dbb3e0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183885 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e490bbd4-3eec-4547-9cab-b43ea88e0377-images\") pod 
\"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183912 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e2b79e66-39f3-40e8-ad1d-cdd963a10983-stats-auth\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183935 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21716abb-b37d-43a7-bf98-2af519ada148-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7l74l\" (UID: \"21716abb-b37d-43a7-bf98-2af519ada148\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.184258 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.183318 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-oauth-serving-cert\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.187014 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/217b0f77-c367-4c25-9965-051d89a335e1-trusted-ca\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.187021 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-service-ca\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.187460 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-serving-cert\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.188098 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.188123 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259e2d88-6dda-4a11-b71b-da8eb015e022-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.188245 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:26.688172107 +0000 UTC m=+208.307230790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.188756 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259e2d88-6dda-4a11-b71b-da8eb015e022-config\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.189039 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.189331 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-config\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.190283 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-certificates\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.191104 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-etcd-serving-ca\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.191912 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-lt5dx\" (UID: \"1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.192101 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-encryption-config\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.194039 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/49787c69-3eda-42e4-92d7-9770d605b4e7-metrics-tls\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.194548 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49787c69-3eda-42e4-92d7-9770d605b4e7-trusted-ca\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.194886 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-etcd-client\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.195124 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-serving-cert\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.195144 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-trusted-ca\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.196898 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217b0f77-c367-4c25-9965-051d89a335e1-serving-cert\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.198836 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259e2d88-6dda-4a11-b71b-da8eb015e022-serving-cert\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.200619 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-oauth-config\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.204823 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.204998 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41974777-d629-460d-b0b1-bdd82afb34d4-metrics-tls\") pod \"dns-operator-744455d44c-vkq7q\" (UID: \"41974777-d629-460d-b0b1-bdd82afb34d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.209291 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f31bded6-f3d5-42b4-b479-3c01ce30e73a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.210914 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-trusted-ca-bundle\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.213065 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-tls\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.218145 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4e9c817-5d2a-4d09-aaed-54b8a3735c25-proxy-tls\") pod \"machine-config-controller-84d6567774-md6w5\" (UID: \"f4e9c817-5d2a-4d09-aaed-54b8a3735c25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.224095 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.252629 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-bound-sa-token\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.256075 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5sdrm"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.258147 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.268751 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf5nh\" (UniqueName: 
\"kubernetes.io/projected/e0fc969b-5b42-4649-9109-d049431cae47-kube-api-access-qf5nh\") pod \"downloads-7954f5f757-htd7x\" (UID: \"e0fc969b-5b42-4649-9109-d049431cae47\") " pod="openshift-console/downloads-7954f5f757-htd7x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.272647 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.279248 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" event={"ID":"b9d107c2-6c77-48a2-b2f6-328fb0d83afc","Type":"ContainerStarted","Data":"2aa4ce3b74868284a8fd1f9da5f7df01cc1852314306dfa8b35c3c1e36f73896"} Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.284737 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285152 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93fe75c8-ce81-4489-a557-db6b117d6079-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pw68d\" (UID: \"93fe75c8-ce81-4489-a557-db6b117d6079\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285191 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-config-volume\") pod \"collect-profiles-29532405-xmtjp\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285215 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzhsv\" (UniqueName: \"kubernetes.io/projected/e490bbd4-3eec-4547-9cab-b43ea88e0377-kube-api-access-dzhsv\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285244 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21716abb-b37d-43a7-bf98-2af519ada148-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7l74l\" (UID: \"21716abb-b37d-43a7-bf98-2af519ada148\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285271 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df-node-bootstrap-token\") pod \"machine-config-server-nktkq\" (UID: \"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df\") " pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285296 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlb5c\" 
(UniqueName: \"kubernetes.io/projected/56d7389c-7094-46ef-ab67-46931e31a6a4-kube-api-access-xlb5c\") pod \"service-ca-9c57cc56f-c5jz8\" (UID: \"56d7389c-7094-46ef-ab67-46931e31a6a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285329 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-socket-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285357 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z47nt\" (UniqueName: \"kubernetes.io/projected/8dd5c785-b167-4b52-8c16-4eea0fcb5685-kube-api-access-z47nt\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6dkb\" (UID: \"8dd5c785-b167-4b52-8c16-4eea0fcb5685\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285380 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67a3e1e3-70ef-4fe0-b72c-f0156143ad4f-config-volume\") pod \"dns-default-ndstv\" (UID: \"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f\") " pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285403 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ef06ba-5fad-49aa-a281-5e674a6f6f39-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-czvmz\" (UID: \"26ef06ba-5fad-49aa-a281-5e674a6f6f39\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285431 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgjpc\" (UniqueName: \"kubernetes.io/projected/e2b79e66-39f3-40e8-ad1d-cdd963a10983-kube-api-access-rgjpc\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285467 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zv2x7\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285522 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93fe75c8-ce81-4489-a557-db6b117d6079-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pw68d\" (UID: \"93fe75c8-ce81-4489-a557-db6b117d6079\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285546 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4367d9de-2fce-45c8-b354-fad4122e7eef-etcd-service-ca\") pod \"etcd-operator-b45778765-tsz6d\" (UID: 
\"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285571 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmdm4\" (UniqueName: \"kubernetes.io/projected/41b9348a-b44f-4ecf-9043-0948b992d64e-kube-api-access-zmdm4\") pod \"auto-csr-approver-29532412-hd7v7\" (UID: \"41b9348a-b44f-4ecf-9043-0948b992d64e\") " pod="openshift-infra/auto-csr-approver-29532412-hd7v7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285593 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56d7389c-7094-46ef-ab67-46931e31a6a4-signing-cabundle\") pod \"service-ca-9c57cc56f-c5jz8\" (UID: \"56d7389c-7094-46ef-ab67-46931e31a6a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285614 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22ced552-66a9-4936-8c25-3e3e8734de79-apiservice-cert\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285648 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc8mj\" (UniqueName: \"kubernetes.io/projected/7f7b6b2b-459d-44d3-96ab-b798b81342dc-kube-api-access-cc8mj\") pod \"catalog-operator-68c6474976-tlx7h\" (UID: \"7f7b6b2b-459d-44d3-96ab-b798b81342dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285668 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0409dd-6f2c-489e-820f-52019dbb3e0c-serving-cert\") pod \"service-ca-operator-777779d784-jhn22\" (UID: \"1f0409dd-6f2c-489e-820f-52019dbb3e0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285700 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96sq9\" (UniqueName: \"kubernetes.io/projected/f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df-kube-api-access-96sq9\") pod \"machine-config-server-nktkq\" (UID: \"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df\") " pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285724 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnxs8\" (UniqueName: \"kubernetes.io/projected/22ced552-66a9-4936-8c25-3e3e8734de79-kube-api-access-jnxs8\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285747 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8fbb\" (UniqueName: \"kubernetes.io/projected/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-kube-api-access-q8fbb\") pod \"marketplace-operator-79b997595-zv2x7\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285768 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k22h\" (UniqueName: \"kubernetes.io/projected/67a3e1e3-70ef-4fe0-b72c-f0156143ad4f-kube-api-access-2k22h\") pod \"dns-default-ndstv\" (UID: \"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f\") " pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285789 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0409dd-6f2c-489e-820f-52019dbb3e0c-config\") pod \"service-ca-operator-777779d784-jhn22\" (UID: \"1f0409dd-6f2c-489e-820f-52019dbb3e0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285810 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67a3e1e3-70ef-4fe0-b72c-f0156143ad4f-metrics-tls\") pod \"dns-default-ndstv\" (UID: \"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f\") " pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285834 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-mountpoint-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285859 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-csi-data-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f7b6b2b-459d-44d3-96ab-b798b81342dc-srv-cert\") pod \"catalog-operator-68c6474976-tlx7h\" (UID: \"7f7b6b2b-459d-44d3-96ab-b798b81342dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285898 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-secret-volume\") pod \"collect-profiles-29532405-xmtjp\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285920 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlkzh\" (UniqueName: \"kubernetes.io/projected/1f0409dd-6f2c-489e-820f-52019dbb3e0c-kube-api-access-zlkzh\") pod \"service-ca-operator-777779d784-jhn22\" (UID: \"1f0409dd-6f2c-489e-820f-52019dbb3e0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285941 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e490bbd4-3eec-4547-9cab-b43ea88e0377-images\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285964 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e2b79e66-39f3-40e8-ad1d-cdd963a10983-stats-auth\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.285984 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21716abb-b37d-43a7-bf98-2af519ada148-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7l74l\" (UID: \"21716abb-b37d-43a7-bf98-2af519ada148\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286007 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08265db9-8c1f-47c9-b812-324889b64e93-config\") pod \"kube-apiserver-operator-766d6c64bb-xhxbw\" (UID: \"08265db9-8c1f-47c9-b812-324889b64e93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286025 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08265db9-8c1f-47c9-b812-324889b64e93-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xhxbw\" (UID: \"08265db9-8c1f-47c9-b812-324889b64e93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286044 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22ced552-66a9-4936-8c25-3e3e8734de79-webhook-cert\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286065 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4367d9de-2fce-45c8-b354-fad4122e7eef-serving-cert\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286098 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2b79e66-39f3-40e8-ad1d-cdd963a10983-metrics-certs\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286120 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08265db9-8c1f-47c9-b812-324889b64e93-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xhxbw\" (UID: \"08265db9-8c1f-47c9-b812-324889b64e93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286141 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e490bbd4-3eec-4547-9cab-b43ea88e0377-proxy-tls\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286160 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e490bbd4-3eec-4547-9cab-b43ea88e0377-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286180 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkg6v\" (UniqueName: \"kubernetes.io/projected/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-kube-api-access-kkg6v\") pod \"collect-profiles-29532405-xmtjp\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286204 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8dd5c785-b167-4b52-8c16-4eea0fcb5685-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6dkb\" (UID: \"8dd5c785-b167-4b52-8c16-4eea0fcb5685\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93fe75c8-ce81-4489-a557-db6b117d6079-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pw68d\" (UID: \"93fe75c8-ce81-4489-a557-db6b117d6079\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286258 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zv2x7\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286278 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2b79e66-39f3-40e8-ad1d-cdd963a10983-service-ca-bundle\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286276 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21716abb-b37d-43a7-bf98-2af519ada148-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7l74l\" (UID: \"21716abb-b37d-43a7-bf98-2af519ada148\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286308 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4367d9de-2fce-45c8-b354-fad4122e7eef-etcd-ca\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286330 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56d7389c-7094-46ef-ab67-46931e31a6a4-signing-key\") pod \"service-ca-9c57cc56f-c5jz8\" (UID: \"56d7389c-7094-46ef-ab67-46931e31a6a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286364 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htd8g\" (UniqueName: \"kubernetes.io/projected/21716abb-b37d-43a7-bf98-2af519ada148-kube-api-access-htd8g\") pod \"kube-storage-version-migrator-operator-b67b599dd-7l74l\" (UID: \"21716abb-b37d-43a7-bf98-2af519ada148\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.286396 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4588b05d-3afb-4e0f-881a-426d13afff5c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ndrwf\" (UID: \"4588b05d-3afb-4e0f-881a-426d13afff5c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.287323 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-config-volume\") pod \"collect-profiles-29532405-xmtjp\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.288516 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-csi-data-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.288961 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh5gd\" (UniqueName: \"kubernetes.io/projected/1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd-kube-api-access-vh5gd\") pod \"cluster-samples-operator-665b6dd947-lt5dx\" (UID: \"1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.289268 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93fe75c8-ce81-4489-a557-db6b117d6079-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pw68d\" (UID: \"93fe75c8-ce81-4489-a557-db6b117d6079\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.289366 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/56d7389c-7094-46ef-ab67-46931e31a6a4-signing-cabundle\") pod \"service-ca-9c57cc56f-c5jz8\" (UID: \"56d7389c-7094-46ef-ab67-46931e31a6a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.289523 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-socket-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.290096 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67a3e1e3-70ef-4fe0-b72c-f0156143ad4f-config-volume\") pod \"dns-default-ndstv\" (UID: \"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f\") " pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.290684 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4588b05d-3afb-4e0f-881a-426d13afff5c-config\") pod \"kube-controller-manager-operator-78b949d7b-ndrwf\" (UID: \"4588b05d-3afb-4e0f-881a-426d13afff5c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.290818 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:26.790459399 +0000 UTC m=+208.409517902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.291278 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e89e8fcd-f436-485c-bc82-18e06f222400-cert\") pod \"ingress-canary-vfkh9\" (UID: \"e89e8fcd-f436-485c-bc82-18e06f222400\") " pod="openshift-ingress-canary/ingress-canary-vfkh9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.291346 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tdrg\" (UniqueName: \"kubernetes.io/projected/d4d35181-5fae-443f-acb5-bcfa63181ce7-kube-api-access-8tdrg\") pod \"multus-admission-controller-857f4d67dd-clvbx\" (UID: \"d4d35181-5fae-443f-acb5-bcfa63181ce7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.291387 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq5b5\" (UniqueName: \"kubernetes.io/projected/4367d9de-2fce-45c8-b354-fad4122e7eef-kube-api-access-fq5b5\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.291422 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/22ced552-66a9-4936-8c25-3e3e8734de79-tmpfs\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.291466 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df-certs\") pod \"machine-config-server-nktkq\" (UID: \"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df\") " pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.291445 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4367d9de-2fce-45c8-b354-fad4122e7eef-etcd-service-ca\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.291548 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tn6x\" (UniqueName: \"kubernetes.io/projected/3fcaab52-6a23-416e-a584-8aa43c11ecef-kube-api-access-7tn6x\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.291578 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4367d9de-2fce-45c8-b354-fad4122e7eef-etcd-client\") pod 
\"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.291794 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08265db9-8c1f-47c9-b812-324889b64e93-config\") pod \"kube-apiserver-operator-766d6c64bb-xhxbw\" (UID: \"08265db9-8c1f-47c9-b812-324889b64e93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.292964 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2b79e66-39f3-40e8-ad1d-cdd963a10983-service-ca-bundle\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.293203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df-node-bootstrap-token\") pod \"machine-config-server-nktkq\" (UID: \"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df\") " pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.293233 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93fe75c8-ce81-4489-a557-db6b117d6079-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pw68d\" (UID: \"93fe75c8-ce81-4489-a557-db6b117d6079\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.293330 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.294162 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4367d9de-2fce-45c8-b354-fad4122e7eef-etcd-ca\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.295133 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8dd5c785-b167-4b52-8c16-4eea0fcb5685-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6dkb\" (UID: \"8dd5c785-b167-4b52-8c16-4eea0fcb5685\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.295234 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm57w\" (UniqueName: \"kubernetes.io/projected/e89e8fcd-f436-485c-bc82-18e06f222400-kube-api-access-mm57w\") pod \"ingress-canary-vfkh9\" (UID: \"e89e8fcd-f436-485c-bc82-18e06f222400\") " pod="openshift-ingress-canary/ingress-canary-vfkh9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.292974 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08265db9-8c1f-47c9-b812-324889b64e93-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xhxbw\" (UID: \"08265db9-8c1f-47c9-b812-324889b64e93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.295748 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zv2x7\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.295859 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22ced552-66a9-4936-8c25-3e3e8734de79-webhook-cert\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.296452 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e490bbd4-3eec-4547-9cab-b43ea88e0377-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.296656 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e490bbd4-3eec-4547-9cab-b43ea88e0377-images\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.301826 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4367d9de-2fce-45c8-b354-fad4122e7eef-etcd-client\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.301902 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f7b6b2b-459d-44d3-96ab-b798b81342dc-srv-cert\") pod \"catalog-operator-68c6474976-tlx7h\" (UID: \"7f7b6b2b-459d-44d3-96ab-b798b81342dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.302140 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0409dd-6f2c-489e-820f-52019dbb3e0c-config\") pod \"service-ca-operator-777779d784-jhn22\" (UID: \"1f0409dd-6f2c-489e-820f-52019dbb3e0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.302337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4588b05d-3afb-4e0f-881a-426d13afff5c-config\") pod \"kube-controller-manager-operator-78b949d7b-ndrwf\" (UID: \"4588b05d-3afb-4e0f-881a-426d13afff5c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.302357 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtc25\" (UniqueName: \"kubernetes.io/projected/15a071cd-05cc-4acb-a093-93e526224c69-kube-api-access-vtc25\") pod \"migrator-59844c95c7-c2lr6\" (UID: \"15a071cd-05cc-4acb-a093-93e526224c69\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.311321 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/22ced552-66a9-4936-8c25-3e3e8734de79-tmpfs\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.313890 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56d7389c-7094-46ef-ab67-46931e31a6a4-signing-key\") pod \"service-ca-9c57cc56f-c5jz8\" (UID: \"56d7389c-7094-46ef-ab67-46931e31a6a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314150 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e2b79e66-39f3-40e8-ad1d-cdd963a10983-stats-auth\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314286 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e2b79e66-39f3-40e8-ad1d-cdd963a10983-default-certificate\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" 
Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314328 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2b79e66-39f3-40e8-ad1d-cdd963a10983-metrics-certs\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314472 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-mountpoint-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314559 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-plugins-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314639 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f7b6b2b-459d-44d3-96ab-b798b81342dc-profile-collector-cert\") pod \"catalog-operator-68c6474976-tlx7h\" (UID: \"7f7b6b2b-459d-44d3-96ab-b798b81342dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314481 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21716abb-b37d-43a7-bf98-2af519ada148-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7l74l\" (UID: \"21716abb-b37d-43a7-bf98-2af519ada148\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314879 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-plugins-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314893 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zv2x7\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.316416 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4367d9de-2fce-45c8-b354-fad4122e7eef-config\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.316482 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfx5p\" (UniqueName: \"kubernetes.io/projected/26ef06ba-5fad-49aa-a281-5e674a6f6f39-kube-api-access-jfx5p\") pod 
\"package-server-manager-789f6589d5-czvmz\" (UID: \"26ef06ba-5fad-49aa-a281-5e674a6f6f39\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.315212 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0409dd-6f2c-489e-820f-52019dbb3e0c-serving-cert\") pod \"service-ca-operator-777779d784-jhn22\" (UID: \"1f0409dd-6f2c-489e-820f-52019dbb3e0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.315429 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df-certs\") pod \"machine-config-server-nktkq\" (UID: \"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df\") " pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.315531 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67a3e1e3-70ef-4fe0-b72c-f0156143ad4f-metrics-tls\") pod \"dns-default-ndstv\" (UID: \"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f\") " pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.315625 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4367d9de-2fce-45c8-b354-fad4122e7eef-serving-cert\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.315941 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ef06ba-5fad-49aa-a281-5e674a6f6f39-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-czvmz\" (UID: \"26ef06ba-5fad-49aa-a281-5e674a6f6f39\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.316644 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d4d35181-5fae-443f-acb5-bcfa63181ce7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-clvbx\" (UID: \"d4d35181-5fae-443f-acb5-bcfa63181ce7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314988 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-secret-volume\") pod \"collect-profiles-29532405-xmtjp\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.316085 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22ced552-66a9-4936-8c25-3e3e8734de79-apiservice-cert\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.316755 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-registration-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.314958 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e490bbd4-3eec-4547-9cab-b43ea88e0377-proxy-tls\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.316860 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3fcaab52-6a23-416e-a584-8aa43c11ecef-registration-dir\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.316888 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4588b05d-3afb-4e0f-881a-426d13afff5c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ndrwf\" (UID: \"4588b05d-3afb-4e0f-881a-426d13afff5c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.317890 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f7b6b2b-459d-44d3-96ab-b798b81342dc-profile-collector-cert\") pod \"catalog-operator-68c6474976-tlx7h\" (UID: \"7f7b6b2b-459d-44d3-96ab-b798b81342dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.318872 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4367d9de-2fce-45c8-b354-fad4122e7eef-config\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.319101 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e89e8fcd-f436-485c-bc82-18e06f222400-cert\") pod \"ingress-canary-vfkh9\" (UID: \"e89e8fcd-f436-485c-bc82-18e06f222400\") " pod="openshift-ingress-canary/ingress-canary-vfkh9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.319719 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d4d35181-5fae-443f-acb5-bcfa63181ce7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-clvbx\" (UID: \"d4d35181-5fae-443f-acb5-bcfa63181ce7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.323006 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxjgp\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-kube-api-access-cxjgp\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc 
kubenswrapper[4982]: I0224 14:52:26.324089 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx8gf\" (UniqueName: \"kubernetes.io/projected/194db7e1-3e0c-42ca-95db-97ceb1c43433-kube-api-access-zx8gf\") pod \"openshift-apiserver-operator-796bbdcf4f-bh6tr\" (UID: \"194db7e1-3e0c-42ca-95db-97ceb1c43433\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.326297 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e2b79e66-39f3-40e8-ad1d-cdd963a10983-default-certificate\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.328114 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4588b05d-3afb-4e0f-881a-426d13afff5c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ndrwf\" (UID: \"4588b05d-3afb-4e0f-881a-426d13afff5c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.346338 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6x5f\" (UniqueName: \"kubernetes.io/projected/95db2c7e-f71c-490b-9fd8-ec4e9e127e8e-kube-api-access-g6x5f\") pod \"cluster-image-registry-operator-dc59b4c8b-tpsbl\" (UID: \"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.380566 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dx44\" (UniqueName: \"kubernetes.io/projected/f4e9c817-5d2a-4d09-aaed-54b8a3735c25-kube-api-access-7dx44\") pod \"machine-config-controller-84d6567774-md6w5\" (UID: \"f4e9c817-5d2a-4d09-aaed-54b8a3735c25\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.383703 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s92k8\" (UniqueName: \"kubernetes.io/projected/49787c69-3eda-42e4-92d7-9770d605b4e7-kube-api-access-s92k8\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.399658 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-htd7x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.403375 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k99ss\" (UniqueName: \"kubernetes.io/projected/217b0f77-c367-4c25-9965-051d89a335e1-kube-api-access-k99ss\") pod \"console-operator-58897d9998-nq62b\" (UID: \"217b0f77-c367-4c25-9965-051d89a335e1\") " pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.417460 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.420965 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.421519 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:26.921485198 +0000 UTC m=+208.540543691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.427309 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqrpx\" (UniqueName: \"kubernetes.io/projected/a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7-kube-api-access-pqrpx\") pod \"apiserver-76f77b778f-bpq5w\" (UID: \"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7\") " pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.428163 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.440446 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.447350 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdbbx\" (UniqueName: \"kubernetes.io/projected/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-kube-api-access-vdbbx\") pod \"console-f9d7485db-nsv6c\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.468755 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh6mv\" (UniqueName: \"kubernetes.io/projected/41974777-d629-460d-b0b1-bdd82afb34d4-kube-api-access-mh6mv\") pod \"dns-operator-744455d44c-vkq7q\" (UID: \"41974777-d629-460d-b0b1-bdd82afb34d4\") " pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.479445 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.485356 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2kxp\" (UniqueName: \"kubernetes.io/projected/259e2d88-6dda-4a11-b71b-da8eb015e022-kube-api-access-x2kxp\") pod \"authentication-operator-69f744f599-qdr8x\" (UID: \"259e2d88-6dda-4a11-b71b-da8eb015e022\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: W0224 14:52:26.489202 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf256a43e_ecd0_4cf0_8d2c_5e662a455533.slice/crio-cd077ae631a5811e0df1b3fbfa8368e73184e42f8d80f4120116f90183f06285 WatchSource:0}: Error finding container cd077ae631a5811e0df1b3fbfa8368e73184e42f8d80f4120116f90183f06285: Status 404 returned error can't find the container with id cd077ae631a5811e0df1b3fbfa8368e73184e42f8d80f4120116f90183f06285 Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.495895 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" Feb 24 14:52:26 crc kubenswrapper[4982]: W0224 14:52:26.496351 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf53d5626_04b0_455d_a4bc_96207b51b221.slice/crio-c5eae3cf7ea372eecbc0d2488b93d00a55add1798441f37c6bab114ca09d3341 WatchSource:0}: Error finding container c5eae3cf7ea372eecbc0d2488b93d00a55add1798441f37c6bab114ca09d3341: Status 404 returned error can't find the container with id c5eae3cf7ea372eecbc0d2488b93d00a55add1798441f37c6bab114ca09d3341 Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.506101 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49787c69-3eda-42e4-92d7-9770d605b4e7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jjpjb\" (UID: \"49787c69-3eda-42e4-92d7-9770d605b4e7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.506273 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.522478 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.522657 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.022625559 +0000 UTC m=+208.641684052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.523111 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.523429 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.023421631 +0000 UTC m=+208.642480124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.544333 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzhsv\" (UniqueName: \"kubernetes.io/projected/e490bbd4-3eec-4547-9cab-b43ea88e0377-kube-api-access-dzhsv\") pod \"machine-config-operator-74547568cd-s7g2b\" (UID: \"e490bbd4-3eec-4547-9cab-b43ea88e0377\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.565242 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5kccw"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.575000 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.578234 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmdm4\" (UniqueName: \"kubernetes.io/projected/41b9348a-b44f-4ecf-9043-0948b992d64e-kube-api-access-zmdm4\") pod \"auto-csr-approver-29532412-hd7v7\" (UID: \"41b9348a-b44f-4ecf-9043-0948b992d64e\") " pod="openshift-infra/auto-csr-approver-29532412-hd7v7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.589354 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlb5c\" (UniqueName: \"kubernetes.io/projected/56d7389c-7094-46ef-ab67-46931e31a6a4-kube-api-access-xlb5c\") pod \"service-ca-9c57cc56f-c5jz8\" (UID: \"56d7389c-7094-46ef-ab67-46931e31a6a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.600319 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.605331 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f5kdd"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.611601 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z47nt\" (UniqueName: \"kubernetes.io/projected/8dd5c785-b167-4b52-8c16-4eea0fcb5685-kube-api-access-z47nt\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6dkb\" (UID: \"8dd5c785-b167-4b52-8c16-4eea0fcb5685\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.624368 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.624716 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.124679945 +0000 UTC m=+208.743738438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.624946 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.625290 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.125275751 +0000 UTC m=+208.744334244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.626463 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.630138 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgjpc\" (UniqueName: \"kubernetes.io/projected/e2b79e66-39f3-40e8-ad1d-cdd963a10983-kube-api-access-rgjpc\") pod \"router-default-5444994796-9x558\" (UID: \"e2b79e66-39f3-40e8-ad1d-cdd963a10983\") " pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.646786 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkg6v\" (UniqueName: \"kubernetes.io/projected/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-kube-api-access-kkg6v\") pod \"collect-profiles-29532405-xmtjp\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.664406 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93fe75c8-ce81-4489-a557-db6b117d6079-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pw68d\" (UID: \"93fe75c8-ce81-4489-a557-db6b117d6079\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.665968 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.666764 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.681986 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.696948 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08265db9-8c1f-47c9-b812-324889b64e93-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xhxbw\" (UID: \"08265db9-8c1f-47c9-b812-324889b64e93\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.698544 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532412-hd7v7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.713146 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlkzh\" (UniqueName: \"kubernetes.io/projected/1f0409dd-6f2c-489e-820f-52019dbb3e0c-kube-api-access-zlkzh\") pod \"service-ca-operator-777779d784-jhn22\" (UID: \"1f0409dd-6f2c-489e-820f-52019dbb3e0c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.722508 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.723231 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.725469 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.726035 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.226013232 +0000 UTC m=+208.845071735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.730139 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k22h\" (UniqueName: \"kubernetes.io/projected/67a3e1e3-70ef-4fe0-b72c-f0156143ad4f-kube-api-access-2k22h\") pod \"dns-default-ndstv\" (UID: \"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f\") " pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.730580 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.738466 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-htd7x"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.750622 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc8mj\" (UniqueName: \"kubernetes.io/projected/7f7b6b2b-459d-44d3-96ab-b798b81342dc-kube-api-access-cc8mj\") pod \"catalog-operator-68c6474976-tlx7h\" (UID: \"7f7b6b2b-459d-44d3-96ab-b798b81342dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.769403 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm57w\" (UniqueName: \"kubernetes.io/projected/e89e8fcd-f436-485c-bc82-18e06f222400-kube-api-access-mm57w\") pod \"ingress-canary-vfkh9\" (UID: \"e89e8fcd-f436-485c-bc82-18e06f222400\") " pod="openshift-ingress-canary/ingress-canary-vfkh9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.774662 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.777692 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.783455 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vfkh9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.791623 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96sq9\" (UniqueName: \"kubernetes.io/projected/f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df-kube-api-access-96sq9\") pod \"machine-config-server-nktkq\" (UID: \"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df\") " pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.800515 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8fbb\" (UniqueName: \"kubernetes.io/projected/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-kube-api-access-q8fbb\") pod \"marketplace-operator-79b997595-zv2x7\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.813589 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.827701 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.828120 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.328102709 +0000 UTC m=+208.947161202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.830082 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htd8g\" (UniqueName: \"kubernetes.io/projected/21716abb-b37d-43a7-bf98-2af519ada148-kube-api-access-htd8g\") pod \"kube-storage-version-migrator-operator-b67b599dd-7l74l\" (UID: \"21716abb-b37d-43a7-bf98-2af519ada148\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.831000 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.838945 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.843821 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5"] Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.852886 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnxs8\" (UniqueName: \"kubernetes.io/projected/22ced552-66a9-4936-8c25-3e3e8734de79-kube-api-access-jnxs8\") pod \"packageserver-d55dfcdfc-8b9r9\" (UID: \"22ced552-66a9-4936-8c25-3e3e8734de79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.864393 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.871536 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtc25\" (UniqueName: \"kubernetes.io/projected/15a071cd-05cc-4acb-a093-93e526224c69-kube-api-access-vtc25\") pod \"migrator-59844c95c7-c2lr6\" (UID: \"15a071cd-05cc-4acb-a093-93e526224c69\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.884273 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.905570 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tn6x\" (UniqueName: \"kubernetes.io/projected/3fcaab52-6a23-416e-a584-8aa43c11ecef-kube-api-access-7tn6x\") pod \"csi-hostpathplugin-ffjpg\" (UID: \"3fcaab52-6a23-416e-a584-8aa43c11ecef\") " pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.905696 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tdrg\" (UniqueName: \"kubernetes.io/projected/d4d35181-5fae-443f-acb5-bcfa63181ce7-kube-api-access-8tdrg\") pod \"multus-admission-controller-857f4d67dd-clvbx\" (UID: \"d4d35181-5fae-443f-acb5-bcfa63181ce7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" Feb 24 14:52:26 crc kubenswrapper[4982]: W0224 14:52:26.912628 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e9c817_5d2a_4d09_aaed_54b8a3735c25.slice/crio-498fd43e3f9f8c20eb14b2a645e4a54760875082aa54d30ababa3f54de41beb6 WatchSource:0}: Error finding container 498fd43e3f9f8c20eb14b2a645e4a54760875082aa54d30ababa3f54de41beb6: Status 404 returned error can't find the container with id 498fd43e3f9f8c20eb14b2a645e4a54760875082aa54d30ababa3f54de41beb6 Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.921787 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4588b05d-3afb-4e0f-881a-426d13afff5c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ndrwf\" (UID: \"4588b05d-3afb-4e0f-881a-426d13afff5c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.931406 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.931800 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.931944 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.431924143 +0000 UTC m=+209.050982636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.933539 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:26 crc kubenswrapper[4982]: E0224 14:52:26.933898 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.433882666 +0000 UTC m=+209.052941159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.945137 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.949123 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq5b5\" (UniqueName: \"kubernetes.io/projected/4367d9de-2fce-45c8-b354-fad4122e7eef-kube-api-access-fq5b5\") pod \"etcd-operator-b45778765-tsz6d\" (UID: \"4367d9de-2fce-45c8-b354-fad4122e7eef\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.958168 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.979094 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:26 crc kubenswrapper[4982]: I0224 14:52:26.979116 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfx5p\" (UniqueName: \"kubernetes.io/projected/26ef06ba-5fad-49aa-a281-5e674a6f6f39-kube-api-access-jfx5p\") pod \"package-server-manager-789f6589d5-czvmz\" (UID: \"26ef06ba-5fad-49aa-a281-5e674a6f6f39\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.014743 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.025340 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.035868 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.036041 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.535820669 +0000 UTC m=+209.154879162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.036320 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.036833 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.536823035 +0000 UTC m=+209.155881528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.040490 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nktkq" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.053264 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.110391 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b"] Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.116742 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.124679 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.137814 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.137977 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.637941677 +0000 UTC m=+209.257000170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.138103 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.138435 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.638413669 +0000 UTC m=+209.257472152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.148482 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.171763 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.206629 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nsv6c"] Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.218607 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.244242 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.244568 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.744543226 +0000 UTC m=+209.363601719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.244685 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.245279 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.745251434 +0000 UTC m=+209.364309927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.285581 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" event={"ID":"b9d107c2-6c77-48a2-b2f6-328fb0d83afc","Type":"ContainerStarted","Data":"8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.286037 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.288746 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" event={"ID":"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3","Type":"ContainerStarted","Data":"cedd38f2935f3976ec793f7a595f9f7d3c9384b97068eca856259b2059ce21b2"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.288774 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" event={"ID":"91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3","Type":"ContainerStarted","Data":"5f49875887bb0659cabd25186a0f384d382d7c31f4b3e20ee422a41886f8596a"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.290084 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.298249 4982 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-l5b9h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.298302 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" podUID="b9d107c2-6c77-48a2-b2f6-328fb0d83afc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.300229 4982 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-64kvf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.300255 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" podUID="91ba2ff0-777f-4d5d-abb1-2ae554a7c3b3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.301880 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" event={"ID":"1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd","Type":"ContainerStarted","Data":"2cdd110ae28480e617a44cc694c81fbdfe50b014d181bbed1d635cfbfe299ba5"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.308870 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" event={"ID":"f4e9c817-5d2a-4d09-aaed-54b8a3735c25","Type":"ContainerStarted","Data":"498fd43e3f9f8c20eb14b2a645e4a54760875082aa54d30ababa3f54de41beb6"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.317156 4982 generic.go:334] "Generic (PLEG): container finished" podID="7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6" containerID="bf684037fcc0aee4e62b1c73bbd2bdb0f3696dc9835d29d4f0b74636355665ec" exitCode=0 Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.317230 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" event={"ID":"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6","Type":"ContainerDied","Data":"bf684037fcc0aee4e62b1c73bbd2bdb0f3696dc9835d29d4f0b74636355665ec"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.317257 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" event={"ID":"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6","Type":"ContainerStarted","Data":"e51394bb4606490f5fd1bac20f5919ba0a811d945a95d7e3a1ab46c344a87637"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.323077 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" event={"ID":"1b0d00bf-0cb6-4fa2-9561-edafa4a10082","Type":"ContainerStarted","Data":"471403beab3e3158ddbde9efef6321288d30bd6381f2c957629328fe1fa3e5d5"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.323136 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" event={"ID":"1b0d00bf-0cb6-4fa2-9561-edafa4a10082","Type":"ContainerStarted","Data":"3fda2c6e1910c623f58fc87efe35e731478caa219be55bc9b3457bd99b2183ae"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.323150 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" event={"ID":"1b0d00bf-0cb6-4fa2-9561-edafa4a10082","Type":"ContainerStarted","Data":"ca30bb5ab5ffa567876fd8f40a0f763b4df2fb21941b6ea1fa85007b630e3e73"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.324773 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-htd7x" event={"ID":"e0fc969b-5b42-4649-9109-d049431cae47","Type":"ContainerStarted","Data":"e9f85ff13a387dc7db843b557a7505bd272d56fceb3a944e6a30f600390568ac"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.327675 4982 generic.go:334] "Generic (PLEG): container finished" podID="f53d5626-04b0-455d-a4bc-96207b51b221" containerID="50f5ecbb7ea38391e2b18f4539c6226de17a40807fd2571590ddca0a3a4a62b3" exitCode=0 Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.327739 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" event={"ID":"f53d5626-04b0-455d-a4bc-96207b51b221","Type":"ContainerDied","Data":"50f5ecbb7ea38391e2b18f4539c6226de17a40807fd2571590ddca0a3a4a62b3"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.327762 4982 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" event={"ID":"f53d5626-04b0-455d-a4bc-96207b51b221","Type":"ContainerStarted","Data":"c5eae3cf7ea372eecbc0d2488b93d00a55add1798441f37c6bab114ca09d3341"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.329522 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" event={"ID":"b36f6a63-d48a-4adb-bdb0-3b63c7679981","Type":"ContainerStarted","Data":"d8fdae7226bfcd7f1c578a08adae79255d45558a03e1e0827cddd09315ba33af"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.331223 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" event={"ID":"37b93651-de65-42d6-96b0-560298df3222","Type":"ContainerStarted","Data":"88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.331252 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" event={"ID":"37b93651-de65-42d6-96b0-560298df3222","Type":"ContainerStarted","Data":"b26bb774a843761d909fb183ebf3bec2b7f0a48da46625d75bc3a8ac07086d24"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.331426 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.340519 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" event={"ID":"9102e968-b893-4021-af5f-41b653664820","Type":"ContainerStarted","Data":"5d949ab10efd530f286845ed4d5457df463cbc8d8de0d94b7f1d805bd318ef32"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.340549 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" event={"ID":"9102e968-b893-4021-af5f-41b653664820","Type":"ContainerStarted","Data":"ed4d20ef7eabfb84570d2e52ad525b6cb2d50bda43a4d813a573f09f8ed11bf3"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.342729 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" event={"ID":"f256a43e-ecd0-4cf0-8d2c-5e662a455533","Type":"ContainerStarted","Data":"de3aa6d29c5dc6047b85e3a8297a609283e131a8efe5c7a3976eb5fb46bff0fe"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.342784 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" event={"ID":"f256a43e-ecd0-4cf0-8d2c-5e662a455533","Type":"ContainerStarted","Data":"cd077ae631a5811e0df1b3fbfa8368e73184e42f8d80f4120116f90183f06285"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.344515 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" event={"ID":"194db7e1-3e0c-42ca-95db-97ceb1c43433","Type":"ContainerStarted","Data":"343f1f54c7f47178f95e4ec282f97da513450fbb4b5a13b60b68d6136ed9e1a3"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.345520 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.345677 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.845660656 +0000 UTC m=+209.464719149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.345810 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.346167 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.846154769 +0000 UTC m=+209.465213252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.346687 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" event={"ID":"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e","Type":"ContainerStarted","Data":"0716a9adf10bf4c07135cc1a43fdc3f4c5001e0dcdc1fbdb4296be7f42f4394c"} Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.446705 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.446916 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.94689026 +0000 UTC m=+209.565948753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.447042 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.448991 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:27.948976527 +0000 UTC m=+209.568035020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.551831 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.552057 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.052033429 +0000 UTC m=+209.671091922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.552709 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.553220 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.053200121 +0000 UTC m=+209.672258614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.658100 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.658424 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.158409392 +0000 UTC m=+209.777467885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.760938 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.762273 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.262240927 +0000 UTC m=+209.881299420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.772800 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qdr8x"] Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.836385 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c5jz8"] Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.851475 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bpq5w"] Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.864938 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.865571 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.365540996 +0000 UTC m=+209.984599499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.873390 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nq62b"] Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.884640 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb"] Feb 24 14:52:27 crc kubenswrapper[4982]: I0224 14:52:27.966978 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:27 crc kubenswrapper[4982]: E0224 14:52:27.967413 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.467399477 +0000 UTC m=+210.086457970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.067983 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.068364 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.568350123 +0000 UTC m=+210.187408616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: W0224 14:52:28.112083 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda33527e4_ee5d_4ee2_a2df_b3ab5c2231c7.slice/crio-8f39c257b65ea9818b21bddb9860d5067d72bc710608a5f7f2a9f20af9bddc99 WatchSource:0}: Error finding container 8f39c257b65ea9818b21bddb9860d5067d72bc710608a5f7f2a9f20af9bddc99: Status 404 returned error can't find the container with id 8f39c257b65ea9818b21bddb9860d5067d72bc710608a5f7f2a9f20af9bddc99 Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.169387 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.170291 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.670276056 +0000 UTC m=+210.289334549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.215622 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.277001 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.277861 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.777815631 +0000 UTC m=+210.396874124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.323903 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.324345 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.824332487 +0000 UTC m=+210.443390980 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.423552 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532412-hd7v7"] Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.429554 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.429960 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:28.929945228 +0000 UTC m=+210.549003721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.443234 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" event={"ID":"194db7e1-3e0c-42ca-95db-97ceb1c43433","Type":"ContainerStarted","Data":"6bb6100c1ca4bb39a2b68e7f42513bb9df713d94ec7a6be79260749eb889f8ba"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.478600 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vfkh9"] Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.491343 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.500848 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d"] Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.555870 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.556476 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:29.056448486 +0000 UTC m=+210.675506979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.575911 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vkq7q"] Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.576404 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" podStartSLOduration=174.576366713 podStartE2EDuration="2m54.576366713s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:28.551030199 +0000 UTC m=+210.170088692" watchObservedRunningTime="2026-02-24 14:52:28.576366713 +0000 UTC m=+210.195425206" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.601268 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" podStartSLOduration=174.601238235 podStartE2EDuration="2m54.601238235s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:28.578252244 +0000 UTC m=+210.197310727" watchObservedRunningTime="2026-02-24 14:52:28.601238235 +0000 UTC m=+210.220296728" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.602833 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l"] Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.602878 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9x558" event={"ID":"e2b79e66-39f3-40e8-ad1d-cdd963a10983","Type":"ContainerStarted","Data":"0b88a7cf6d40a3e0ecc3959322ae472c5816e25e9bce9c9815b6fe6b3aceb695"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.609034 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb"] Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.609294 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lccsh" podStartSLOduration=175.609284212 podStartE2EDuration="2m55.609284212s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:28.604092642 +0000 UTC m=+210.223151135" watchObservedRunningTime="2026-02-24 14:52:28.609284212 +0000 UTC m=+210.228342705" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.617349 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" event={"ID":"56d7389c-7094-46ef-ab67-46931e31a6a4","Type":"ContainerStarted","Data":"298b4e871a838ca9d0a96df4eebcfead6ad736fafcf5dfc01b48fb75f1d6de6f"} Feb 24 14:52:28 
crc kubenswrapper[4982]: I0224 14:52:28.633123 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nsv6c" event={"ID":"dfdbf1b1-2b07-4bff-ab32-759a436b4a78","Type":"ContainerStarted","Data":"2cb9dcd039a072d86f46a706e506f719f4e0247856d99b3ae3254c5c95e2c854"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.655322 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sdrm" podStartSLOduration=174.655296604 podStartE2EDuration="2m54.655296604s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:28.632387646 +0000 UTC m=+210.251446139" watchObservedRunningTime="2026-02-24 14:52:28.655296604 +0000 UTC m=+210.274355097" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.662429 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.663953 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:29.163922128 +0000 UTC m=+210.782980621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.672098 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" event={"ID":"9102e968-b893-4021-af5f-41b653664820","Type":"ContainerStarted","Data":"474e2e71915c9fba3b0033c7afd157729ccdef66a4534bcf55f8144eec08e831"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.719244 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" event={"ID":"b36f6a63-d48a-4adb-bdb0-3b63c7679981","Type":"ContainerStarted","Data":"c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.722597 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.724665 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" podStartSLOduration=175.724630817 podStartE2EDuration="2m55.724630817s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:28.720056964 +0000 UTC m=+210.339115467" 
watchObservedRunningTime="2026-02-24 14:52:28.724630817 +0000 UTC m=+210.343689310" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.744212 4982 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-f5kdd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.744341 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" podUID="b36f6a63-d48a-4adb-bdb0-3b63c7679981" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.765090 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.765622 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:29.265601144 +0000 UTC m=+210.884659667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.775327 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bh6tr" podStartSLOduration=175.775305426 podStartE2EDuration="2m55.775305426s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:28.76437839 +0000 UTC m=+210.383436883" watchObservedRunningTime="2026-02-24 14:52:28.775305426 +0000 UTC m=+210.394363919" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.781691 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" event={"ID":"f4e9c817-5d2a-4d09-aaed-54b8a3735c25","Type":"ContainerStarted","Data":"4b84e1a012fac81c09fd959c1170340fc09986a031a2df944d7a91e582ecfd4b"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.791935 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" event={"ID":"8dd5c785-b167-4b52-8c16-4eea0fcb5685","Type":"ContainerStarted","Data":"876d6b8b09e217b2075bd1ac2f10bbf818a268b887b5b385ed14022d1ca5d00a"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.809150 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" event={"ID":"1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd","Type":"ContainerStarted","Data":"dbd7f2f9f6c6bf8d3ce568738e39c57c6d07748f49aa9294a39862143b18d0a8"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.849111 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" event={"ID":"95db2c7e-f71c-490b-9fd8-ec4e9e127e8e","Type":"ContainerStarted","Data":"0ec60a6ee1c9e9157e18dd7de1b84ccab65663e8037e339c6e6d4054eb53274a"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.868315 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.869059 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.869155 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:29.36914034 +0000 UTC m=+210.988198833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.873224 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.873264 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.873446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" event={"ID":"e490bbd4-3eec-4547-9cab-b43ea88e0377","Type":"ContainerStarted","Data":"085cb97e27c3aca3ab5e0c76b3fe5201716142b78b74afa198001141a3edf509"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.876798 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rj9bm" podStartSLOduration=175.876786577 podStartE2EDuration="2m55.876786577s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
14:52:28.810742542 +0000 UTC m=+210.429801035" watchObservedRunningTime="2026-02-24 14:52:28.876786577 +0000 UTC m=+210.495845070" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.886426 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" event={"ID":"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7","Type":"ContainerStarted","Data":"8f39c257b65ea9818b21bddb9860d5067d72bc710608a5f7f2a9f20af9bddc99"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.890736 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nq62b" event={"ID":"217b0f77-c367-4c25-9965-051d89a335e1","Type":"ContainerStarted","Data":"1a050fe808c4b469e90624311b0ddb82331b2ce65db3d6f4cf3edc6ee2d9ba90"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.937668 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" event={"ID":"7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6","Type":"ContainerStarted","Data":"e6dadbeadfdfa248c102609e8e866a6d7bd3c8efc45e41d17edb78b0032ff6bb"} Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.938593 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.977212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:28 crc kubenswrapper[4982]: E0224 14:52:28.977667 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:29.477647211 +0000 UTC m=+211.096705704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:28 crc kubenswrapper[4982]: I0224 14:52:28.995720 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" event={"ID":"259e2d88-6dda-4a11-b71b-da8eb015e022","Type":"ContainerStarted","Data":"c51143027e7ca4f2d8edef5a57e0742dc6edcd82c43dea670b1ba9a0759441fe"} Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.044443 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-htd7x" event={"ID":"e0fc969b-5b42-4649-9109-d049431cae47","Type":"ContainerStarted","Data":"7dabe8e621eac0e2a60010b3f78633268dddcde4bcac5d56649cdfe5d58a4823"} Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.046103 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-htd7x" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.062046 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9x558" podStartSLOduration=176.062020229 podStartE2EDuration="2m56.062020229s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:28.894016822 +0000 UTC m=+210.513075315" watchObservedRunningTime="2026-02-24 14:52:29.062020229 +0000 UTC m=+210.681078722" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.079484 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-htd7x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.079554 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-htd7x" podUID="e0fc969b-5b42-4649-9109-d049431cae47" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.080052 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:29 crc kubenswrapper[4982]: E0224 14:52:29.081190 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:29.581162936 +0000 UTC m=+211.200221429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.093626 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" podStartSLOduration=176.093608151 podStartE2EDuration="2m56.093608151s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:29.060153289 +0000 UTC m=+210.679211782" watchObservedRunningTime="2026-02-24 14:52:29.093608151 +0000 UTC m=+210.712666644" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.095642 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tsz6d"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.110415 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zv2x7"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.111823 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.126721 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nktkq" event={"ID":"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df","Type":"ContainerStarted","Data":"34ea05cbe6abab9c89f8237f7fe15be5e50d264f661056ac18e1c96a15ba0cc2"} Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.134685 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jhn22"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.143424 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.146836 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-nsv6c" podStartSLOduration=176.146816868 podStartE2EDuration="2m56.146816868s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:29.110045096 +0000 UTC m=+210.729103589" watchObservedRunningTime="2026-02-24 14:52:29.146816868 +0000 UTC m=+210.765875361" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.173099 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-64kvf" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.173140 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.177545 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" 
podStartSLOduration=176.177522048 podStartE2EDuration="2m56.177522048s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:29.170117448 +0000 UTC m=+210.789175941" watchObservedRunningTime="2026-02-24 14:52:29.177522048 +0000 UTC m=+210.796580531" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.232446 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ndstv"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.242818 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.242881 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.248583 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.249109 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.251731 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ffjpg"] Feb 24 14:52:29 crc kubenswrapper[4982]: E0224 14:52:29.267930 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:29.767906109 +0000 UTC m=+211.386964602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.322259 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpsbl" podStartSLOduration=176.322239387 podStartE2EDuration="2m56.322239387s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:29.211970509 +0000 UTC m=+210.831029012" watchObservedRunningTime="2026-02-24 14:52:29.322239387 +0000 UTC m=+210.941297880" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.324567 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-clvbx"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.329354 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.331730 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l5b9h"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.331764 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.331822 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-htd7x" podStartSLOduration=176.331806875 podStartE2EDuration="2m56.331806875s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:29.32644679 +0000 UTC m=+210.945505283" watchObservedRunningTime="2026-02-24 14:52:29.331806875 +0000 UTC m=+210.950865368" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.358321 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:29 crc kubenswrapper[4982]: E0224 14:52:29.359079 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:29.859061851 +0000 UTC m=+211.478120344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.389891 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" podStartSLOduration=176.389868983 podStartE2EDuration="2m56.389868983s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:29.35718131 +0000 UTC m=+210.976239813" watchObservedRunningTime="2026-02-24 14:52:29.389868983 +0000 UTC m=+211.008927476" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.390190 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz"] Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.394098 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" podStartSLOduration=175.394077136 podStartE2EDuration="2m55.394077136s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:29.392258568 +0000 UTC m=+211.011317081" watchObservedRunningTime="2026-02-24 14:52:29.394077136 +0000 UTC m=+211.013135629" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.434872 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" podStartSLOduration=176.434856618 podStartE2EDuration="2m56.434856618s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:29.432944576 +0000 UTC m=+211.052003069" watchObservedRunningTime="2026-02-24 14:52:29.434856618 +0000 UTC m=+211.053915111" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.461113 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:29 crc kubenswrapper[4982]: E0224 14:52:29.461526 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:29.961512267 +0000 UTC m=+211.580570770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.474760 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nktkq" podStartSLOduration=6.474743065 podStartE2EDuration="6.474743065s" podCreationTimestamp="2026-02-24 14:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:29.474177169 +0000 UTC m=+211.093235662" watchObservedRunningTime="2026-02-24 14:52:29.474743065 +0000 UTC m=+211.093801558" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.562195 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:29 crc kubenswrapper[4982]: E0224 14:52:29.562460 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:30.062446933 +0000 UTC m=+211.681505426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.673244 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:29 crc kubenswrapper[4982]: E0224 14:52:29.673615 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:30.173602926 +0000 UTC m=+211.792661419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.783086 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:29 crc kubenswrapper[4982]: E0224 14:52:29.783736 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:30.283716769 +0000 UTC m=+211.902775262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.872880 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 14:52:29 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Feb 24 14:52:29 crc kubenswrapper[4982]: [+]process-running ok Feb 24 14:52:29 crc kubenswrapper[4982]: healthz check failed Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.872929 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 14:52:29 crc kubenswrapper[4982]: I0224 14:52:29.892675 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:29 crc kubenswrapper[4982]: E0224 14:52:29.892998 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:30.39298661 +0000 UTC m=+212.012045103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.003645 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.004451 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:30.50443535 +0000 UTC m=+212.123493843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.105211 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.105519 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:30.605491059 +0000 UTC m=+212.224549552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.144846 4982 ???:1] "http: TLS handshake error from 192.168.126.11:33458: no serving certificate available for the kubelet" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.172344 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9x558" event={"ID":"e2b79e66-39f3-40e8-ad1d-cdd963a10983","Type":"ContainerStarted","Data":"ba5aa91e96dcaac57d085b20f7fc3ded01666ab158be265cb8c95a5e92a33b1f"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.178776 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nktkq" event={"ID":"f2a4ed1c-cb21-46fd-aa9b-7dbae98b51df","Type":"ContainerStarted","Data":"29672ee00210f2a09bdc1231ba9fe94356bdbd8d9347ccfbe067b58ebc77ce3c"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.181321 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vfkh9" event={"ID":"e89e8fcd-f436-485c-bc82-18e06f222400","Type":"ContainerStarted","Data":"52a60441985da4d39ac65075e230ce2ddbac70f86d1e3bf68ab7d2df6644dd67"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.181339 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vfkh9" event={"ID":"e89e8fcd-f436-485c-bc82-18e06f222400","Type":"ContainerStarted","Data":"62084ee83dea5f6edac7b0e09d9e3e280c030aa341d6088183cd8ae848fbd139"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.184132 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" event={"ID":"e490bbd4-3eec-4547-9cab-b43ea88e0377","Type":"ContainerStarted","Data":"20920f57fb01daeaede8215171c35ad6d1d67bd7f92361c35af0ccbabfcf5c81"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.184150 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" event={"ID":"e490bbd4-3eec-4547-9cab-b43ea88e0377","Type":"ContainerStarted","Data":"5b9044034d3ab816c31d0f4cac422c1abd01085158605205fc2b817e1bca15bf"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.192133 4982 generic.go:334] "Generic (PLEG): container finished" podID="a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7" containerID="2d0b3c9f10960a9b0b27ceadb0c5137a2007999777101aa47ed14c823b7e76cb" exitCode=0 Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.192187 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" event={"ID":"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7","Type":"ContainerDied","Data":"2d0b3c9f10960a9b0b27ceadb0c5137a2007999777101aa47ed14c823b7e76cb"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.195512 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" event={"ID":"f53d5626-04b0-455d-a4bc-96207b51b221","Type":"ContainerStarted","Data":"1d2579dd9b7e02b3aef872272d106f778f85794228f6b296f95a05a9ed570d36"} Feb 
24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.196478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6" event={"ID":"15a071cd-05cc-4acb-a093-93e526224c69","Type":"ContainerStarted","Data":"f868b3b613af03e9a1db921306b5dae31e1276dbdb25321a08463f24ba20db14"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.197160 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" event={"ID":"08265db9-8c1f-47c9-b812-324889b64e93","Type":"ContainerStarted","Data":"24c1b8a95acecd48e65608524848ec533b120fb6b95cbc5dcaa62998000cfa30"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.206530 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.208153 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:30.708135341 +0000 UTC m=+212.327193834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.213261 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-md6w5" event={"ID":"f4e9c817-5d2a-4d09-aaed-54b8a3735c25","Type":"ContainerStarted","Data":"831098405cb07a6712a722dd7b12ed31e777a3055f786dd3168aff7ce02f8360"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.221124 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" event={"ID":"21716abb-b37d-43a7-bf98-2af519ada148","Type":"ContainerStarted","Data":"c074fca9662a9c52bc0add003eb34b4e29d9e64ea1922ec43f82a88547f13640"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.221174 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" event={"ID":"21716abb-b37d-43a7-bf98-2af519ada148","Type":"ContainerStarted","Data":"3919ec3ba1d9198564714b974cd47305e44f867cbf31e59786dc6fc8149d279f"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.221895 4982 ???:1] "http: TLS handshake error from 192.168.126.11:33466: no serving certificate available for the kubelet" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.223320 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nsv6c" event={"ID":"dfdbf1b1-2b07-4bff-ab32-759a436b4a78","Type":"ContainerStarted","Data":"1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd"} Feb 24 
14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.228888 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" event={"ID":"4588b05d-3afb-4e0f-881a-426d13afff5c","Type":"ContainerStarted","Data":"ffc75aa2cbc14c7f382931cd3acba8e76cfa174ab55a52ca3d5924c4821bd019"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.234837 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" event={"ID":"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd","Type":"ContainerStarted","Data":"8792090edab40f7e1351f2624ad3bb3044f726d710dcab952e1eb4cbd705cc2b"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.239467 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nq62b" event={"ID":"217b0f77-c367-4c25-9965-051d89a335e1","Type":"ContainerStarted","Data":"1b89e00ec66ca1c4c3ef34d4e9ef2a5376c69bd6d17fa55098ebd464d879f6da"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.240328 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.241526 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" event={"ID":"22ced552-66a9-4936-8c25-3e3e8734de79","Type":"ContainerStarted","Data":"327cc24c8c3f08ade19f4d7c459399ab13b74d2a112cede458955e070bb32e29"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.242526 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532412-hd7v7" event={"ID":"41b9348a-b44f-4ecf-9043-0948b992d64e","Type":"ContainerStarted","Data":"5a6535ad86758c252192ca79eb05858dd24068e612b50fdfde61e7be82b81c7d"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.252795 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" event={"ID":"41974777-d629-460d-b0b1-bdd82afb34d4","Type":"ContainerStarted","Data":"1f497a12a058991acb650b56e80cfc8eda389559a978bc0d1c795bf683eb284e"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.252843 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" event={"ID":"41974777-d629-460d-b0b1-bdd82afb34d4","Type":"ContainerStarted","Data":"84707dceb238fd8ef0e01300c9d743f4b91b38379667e8bdc21cf0e455467152"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.254648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" event={"ID":"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080","Type":"ContainerStarted","Data":"5a83f5a769c26d58b32022261f923eade6d061046fb2850ed786fbcec453780a"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.260313 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" event={"ID":"7f7b6b2b-459d-44d3-96ab-b798b81342dc","Type":"ContainerStarted","Data":"b79f3d8b39fda98123862dbf5337a9f816667b9d81b02382c8b676aa792fcdcc"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.260358 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" 
event={"ID":"7f7b6b2b-459d-44d3-96ab-b798b81342dc","Type":"ContainerStarted","Data":"028371ab313fa0838b025320a096992dddc9914cce9dfc5c9eb612e863c2b085"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.261448 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.265333 4982 patch_prober.go:28] interesting pod/console-operator-58897d9998-nq62b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.265377 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nq62b" podUID="217b0f77-c367-4c25-9965-051d89a335e1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.277218 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" event={"ID":"93fe75c8-ce81-4489-a557-db6b117d6079","Type":"ContainerStarted","Data":"5e6f80de85bcd0fcf553026e0dccbee85fe18549a5ec28bb68dbf44b349f95c7"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.277263 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" event={"ID":"93fe75c8-ce81-4489-a557-db6b117d6079","Type":"ContainerStarted","Data":"8dded77c3e8071a153bdfddf5ec154ada677d0bd941f474a1eeac2eca6a9fa3a"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.281317 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ndstv" event={"ID":"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f","Type":"ContainerStarted","Data":"ebf14b25b085c07c681c525c3d7d7a260ed6189e4ecf9d58d377111d189b8f82"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.291432 4982 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tlx7h container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.291480 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" podUID="7f7b6b2b-459d-44d3-96ab-b798b81342dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.318013 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.325757 4982 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:30.825731286 +0000 UTC m=+212.444789779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.327785 4982 ???:1] "http: TLS handshake error from 192.168.126.11:33482: no serving certificate available for the kubelet" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.335867 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qdr8x" event={"ID":"259e2d88-6dda-4a11-b71b-da8eb015e022","Type":"ContainerStarted","Data":"7d253fc0593f152a3fca421f3c5c0643d98b15128f13a29b2a2ceb183fa5cc5f"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.345814 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" event={"ID":"d4d35181-5fae-443f-acb5-bcfa63181ce7","Type":"ContainerStarted","Data":"b1844aeba885090a36ad78fbf1de6bab7d7904f559f0781df4ef942aa9d70ef1"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.345872 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" event={"ID":"d4d35181-5fae-443f-acb5-bcfa63181ce7","Type":"ContainerStarted","Data":"8f62edb0cc3d9ba37f4ccec78760f71b5c7f09b09a43e590b22fe06ca8bf82d4"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.348395 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" event={"ID":"26ef06ba-5fad-49aa-a281-5e674a6f6f39","Type":"ContainerStarted","Data":"19c9bdec445340b8a8a3068a67d7a7a70e5cf379c3aa9de0ffdd6b3c60bfd64d"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.399064 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-lt5dx" event={"ID":"1a3fdfb4-f4ea-401e-a08e-e8c300ba64cd","Type":"ContainerStarted","Data":"b096885c1a62392f8fa76a84f57391d696c75ba813d92bb1f3626f988aaa303a"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.419529 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.420897 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:30.920875946 +0000 UTC m=+212.539934429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.463589 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s7g2b" podStartSLOduration=176.463561439 podStartE2EDuration="2m56.463561439s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.454035182 +0000 UTC m=+212.073093675" watchObservedRunningTime="2026-02-24 14:52:30.463561439 +0000 UTC m=+212.082619932" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.481419 4982 ???:1] "http: TLS handshake error from 192.168.126.11:33494: no serving certificate available for the kubelet" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.491486 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vfkh9" podStartSLOduration=7.491469643 podStartE2EDuration="7.491469643s" podCreationTimestamp="2026-02-24 14:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.49100752 +0000 UTC m=+212.110066003" watchObservedRunningTime="2026-02-24 14:52:30.491469643 +0000 UTC m=+212.110528136" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.520277 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" event={"ID":"56d7389c-7094-46ef-ab67-46931e31a6a4","Type":"ContainerStarted","Data":"eb166d9d256bb51f7aebc2f3cffc14de98a207f9aade906d4390b201c67fc0a5"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.540376 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nq62b" podStartSLOduration=177.540356073 podStartE2EDuration="2m57.540356073s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.536268763 +0000 UTC m=+212.155327256" watchObservedRunningTime="2026-02-24 14:52:30.540356073 +0000 UTC m=+212.159414566" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.556456 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.558071 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.058057811 +0000 UTC m=+212.677116304 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.572983 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" event={"ID":"8dd5c785-b167-4b52-8c16-4eea0fcb5685","Type":"ContainerStarted","Data":"704ab0b512dded89de00a01d136ed19a1e67863da3ed0a30429006366e88c6c9"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.600949 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" event={"ID":"3fcaab52-6a23-416e-a584-8aa43c11ecef","Type":"ContainerStarted","Data":"8d42876bae246279478c2db026dd5955752a8f89594d6ae61d3fe37acc451aa4"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.603076 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" event={"ID":"49787c69-3eda-42e4-92d7-9770d605b4e7","Type":"ContainerStarted","Data":"c5aeecfda0079d0c69ac26fbfd803464a891118c8c5b055266251d5e4a25cd71"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.603101 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" event={"ID":"49787c69-3eda-42e4-92d7-9770d605b4e7","Type":"ContainerStarted","Data":"9916550c7b332f6247d93295a9716f56d44548b876f8e16b595fac5bca58046d"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.606448 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" event={"ID":"4367d9de-2fce-45c8-b354-fad4122e7eef","Type":"ContainerStarted","Data":"a9e2da5044315e94b3383e7dfa81db6750cb808d7e9a7e07b1f51452ad47999a"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.607844 4982 ???:1] "http: TLS handshake error from 192.168.126.11:33504: no serving certificate available for the kubelet" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.612580 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" event={"ID":"1f0409dd-6f2c-489e-820f-52019dbb3e0c","Type":"ContainerStarted","Data":"b3512d4c3d02d02c7da3f0d356d4b3fe073f3289eb0c1ac2c730638374ed77fe"} Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.620727 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-htd7x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.620792 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-htd7x" podUID="e0fc969b-5b42-4649-9109-d049431cae47" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.634827 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" 
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.636126 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pw68d" podStartSLOduration=177.636103349 podStartE2EDuration="2m57.636103349s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.633622862 +0000 UTC m=+212.252681355" watchObservedRunningTime="2026-02-24 14:52:30.636103349 +0000 UTC m=+212.255161842"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.636421 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" podStartSLOduration=176.636415067 podStartE2EDuration="2m56.636415067s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.592527192 +0000 UTC m=+212.211585685" watchObservedRunningTime="2026-02-24 14:52:30.636415067 +0000 UTC m=+212.255473560"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.661381 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.661870 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.161849214 +0000 UTC m=+212.780907707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.674697 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7l74l" podStartSLOduration=177.674676561 podStartE2EDuration="2m57.674676561s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.673040566 +0000 UTC m=+212.292099049" watchObservedRunningTime="2026-02-24 14:52:30.674676561 +0000 UTC m=+212.293735054"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.775353 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.782850 4982 ???:1] "http: TLS handshake error from 192.168.126.11:33506: no serving certificate available for the kubelet"
Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.785737 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.28572008 +0000 UTC m=+212.904778563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.813236 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" podStartSLOduration=176.813216912 podStartE2EDuration="2m56.813216912s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.755457072 +0000 UTC m=+212.374515565" watchObservedRunningTime="2026-02-24 14:52:30.813216912 +0000 UTC m=+212.432275405"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.875843 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 14:52:30 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld
Feb 24 14:52:30 crc kubenswrapper[4982]: [+]process-running ok
Feb 24 14:52:30 crc kubenswrapper[4982]: healthz check failed
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.875909 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.876265 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.876657 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.376643375 +0000 UTC m=+212.995701868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.891410 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" podStartSLOduration=177.891390503 podStartE2EDuration="2m57.891390503s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.89090058 +0000 UTC m=+212.509959073" watchObservedRunningTime="2026-02-24 14:52:30.891390503 +0000 UTC m=+212.510448996"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.892083 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6dkb" podStartSLOduration=176.892075412 podStartE2EDuration="2m56.892075412s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.814511977 +0000 UTC m=+212.433570480" watchObservedRunningTime="2026-02-24 14:52:30.892075412 +0000 UTC m=+212.511133905"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.953219 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.963052 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-c5jz8" podStartSLOduration=176.963037309 podStartE2EDuration="2m56.963037309s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:30.961848106 +0000 UTC m=+212.580906599" watchObservedRunningTime="2026-02-24 14:52:30.963037309 +0000 UTC m=+212.582095802"
Feb 24 14:52:30 crc kubenswrapper[4982]: I0224 14:52:30.980365 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq"
Feb 24 14:52:30 crc kubenswrapper[4982]: E0224 14:52:30.980687 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.480676894 +0000 UTC m=+213.099735387 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.071898 4982 ???:1] "http: TLS handshake error from 192.168.126.11:33510: no serving certificate available for the kubelet" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.081093 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.081466 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.581446976 +0000 UTC m=+213.200505469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.182330 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.182648 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.682635858 +0000 UTC m=+213.301694351 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.198028 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.198059 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.282892 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.283170 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.783144593 +0000 UTC m=+213.402203076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.283595 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.283917 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.783905744 +0000 UTC m=+213.402964237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.385013 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.385368 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.885351803 +0000 UTC m=+213.504410296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.477313 4982 ???:1] "http: TLS handshake error from 192.168.126.11:33520: no serving certificate available for the kubelet" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.486764 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.487135 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:31.987111862 +0000 UTC m=+213.606170355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.536467 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.591155 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.591335 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.091309395 +0000 UTC m=+213.710367878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.591841 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.592124 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.092115627 +0000 UTC m=+213.711174120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.636912 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" event={"ID":"08265db9-8c1f-47c9-b812-324889b64e93","Type":"ContainerStarted","Data":"95ecaa2f596755b3a70e8ae61aae0f6a662413aa0a4a30fe94477c31776e1a5b"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.668544 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhxbw" podStartSLOduration=178.66852465 podStartE2EDuration="2m58.66852465s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:31.667897264 +0000 UTC m=+213.286955767" watchObservedRunningTime="2026-02-24 14:52:31.66852465 +0000 UTC m=+213.287583143" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.681065 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" event={"ID":"49787c69-3eda-42e4-92d7-9770d605b4e7","Type":"ContainerStarted","Data":"2799d1767159f6e640fea13d3e5dc8a6d916e0b00badf500dbe04df352a67ff7"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.692612 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.693392 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.193365001 +0000 UTC m=+213.812423494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.695131 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" event={"ID":"41974777-d629-460d-b0b1-bdd82afb34d4","Type":"ContainerStarted","Data":"b572fd1d3fe0087d9badea743f38af9b6343a2a7cebb756372f8a14cb754bae0"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.709563 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jjpjb" podStartSLOduration=178.709547289 podStartE2EDuration="2m58.709547289s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:31.708943952 +0000 UTC m=+213.328002445" watchObservedRunningTime="2026-02-24 14:52:31.709547289 +0000 UTC m=+213.328605782" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.710437 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" event={"ID":"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7","Type":"ContainerStarted","Data":"11c50f4ef973b303cbf5cd37102b4c26dc0c00e7e1db979fa08ffc90cc993843"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.739943 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" event={"ID":"4588b05d-3afb-4e0f-881a-426d13afff5c","Type":"ContainerStarted","Data":"8a7c1eec923de6d6fdcf6a3c2e822e35111f60aa1ad70a6528a665ab84af791e"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.742629 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ndstv" event={"ID":"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f","Type":"ContainerStarted","Data":"58620e6bfcf81cac1f0cfb3e72a1745e07f9772655eb1519d813a4462c69bee3"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.742659 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ndstv" event={"ID":"67a3e1e3-70ef-4fe0-b72c-f0156143ad4f","Type":"ContainerStarted","Data":"50a598221b3939b959a48b13fa23ce9f092652ac423d58d526ed6247d23bed41"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.743165 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ndstv" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.763243 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vkq7q" podStartSLOduration=178.763225628 podStartE2EDuration="2m58.763225628s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:31.75810282 +0000 UTC m=+213.377161313" watchObservedRunningTime="2026-02-24 14:52:31.763225628 +0000 UTC m=+213.382284121" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.772210 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" event={"ID":"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080","Type":"ContainerStarted","Data":"ec5b8a65e094ab700ff24c9c4b7c02590ca4b6396b7013c10ca299a9251bc21e"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.794548 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.794902 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.294889364 +0000 UTC m=+213.913947857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.800273 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" event={"ID":"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd","Type":"ContainerStarted","Data":"1ff30d486c07fb0d262eae9e14822667822d7b069406949f58abaac427db01fd"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.801016 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.802468 4982 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zv2x7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.802565 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" podUID="34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.812537 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ndrwf" podStartSLOduration=178.812478918 podStartE2EDuration="2m58.812478918s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:31.796659051 +0000 UTC m=+213.415717534" watchObservedRunningTime="2026-02-24 14:52:31.812478918 +0000 UTC m=+213.431537411" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.823860 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-tsz6d" event={"ID":"4367d9de-2fce-45c8-b354-fad4122e7eef","Type":"ContainerStarted","Data":"a627bdbe06ce8e500be40239333494365ab6ecec8bf20b2f061be6ffd58390ef"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.831518 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ndstv" podStartSLOduration=8.831478592 podStartE2EDuration="8.831478592s" podCreationTimestamp="2026-02-24 14:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:31.830265609 +0000 UTC m=+213.449324102" watchObservedRunningTime="2026-02-24 14:52:31.831478592 +0000 UTC m=+213.450537085" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.839376 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" event={"ID":"1f0409dd-6f2c-489e-820f-52019dbb3e0c","Type":"ContainerStarted","Data":"fa067dbde0817e697ea686a4b4362217e9b13ea1da36330205516f23f8c685c9"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.854338 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" podStartSLOduration=178.854308398 podStartE2EDuration="2m58.854308398s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:31.85362258 +0000 UTC m=+213.472681073" watchObservedRunningTime="2026-02-24 14:52:31.854308398 +0000 UTC m=+213.473366891" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.883868 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 14:52:31 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Feb 24 14:52:31 crc kubenswrapper[4982]: [+]process-running ok Feb 24 14:52:31 crc kubenswrapper[4982]: healthz check failed Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.883970 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.884566 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" event={"ID":"22ced552-66a9-4936-8c25-3e3e8734de79","Type":"ContainerStarted","Data":"efaf75289db577ea06a0a9177ae0a0d621f0a9ce91070407ab57f00b3e44b3ff"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.885164 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.892648 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" podStartSLOduration=177.892627122 podStartE2EDuration="2m57.892627122s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-24 14:52:31.890611608 +0000 UTC m=+213.509670101" watchObservedRunningTime="2026-02-24 14:52:31.892627122 +0000 UTC m=+213.511685615" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.895631 4982 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-8b9r9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.895687 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" podUID="22ced552-66a9-4936-8c25-3e3e8734de79" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.896731 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:31 crc kubenswrapper[4982]: E0224 14:52:31.898317 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.398298707 +0000 UTC m=+214.017357200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.916133 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" event={"ID":"26ef06ba-5fad-49aa-a281-5e674a6f6f39","Type":"ContainerStarted","Data":"9bc3ae4150b6e3156f400598e5a9d16e919654f3ccd13d3632f58ba9531d1550"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.916188 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" event={"ID":"26ef06ba-5fad-49aa-a281-5e674a6f6f39","Type":"ContainerStarted","Data":"ad8f510a423dc84e5123d7a458785b5c99fc60ab85bc8ab374982b13b5536002"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.916979 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.931371 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6" event={"ID":"15a071cd-05cc-4acb-a093-93e526224c69","Type":"ContainerStarted","Data":"ecde0533d3a2533083736505a8e1da14feb5a8b6e69f0b81e1f47fb648d312cd"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.931829 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6" event={"ID":"15a071cd-05cc-4acb-a093-93e526224c69","Type":"ContainerStarted","Data":"c34c643f3fa746760d85581a713d9477e578ef55baeb0c840bb5a5c9c4760408"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.935375 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" event={"ID":"d4d35181-5fae-443f-acb5-bcfa63181ce7","Type":"ContainerStarted","Data":"67ba3688c6628112093d59a075dfeac3990fd4ec834b4098f6dd8dde2ecea04b"} Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.935827 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" podUID="b9d107c2-6c77-48a2-b2f6-328fb0d83afc" containerName="controller-manager" containerID="cri-o://8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd" gracePeriod=30 Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.941923 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" podStartSLOduration=177.941900184 podStartE2EDuration="2m57.941900184s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:31.939755856 +0000 UTC m=+213.558814349" watchObservedRunningTime="2026-02-24 14:52:31.941900184 +0000 UTC m=+213.560958677" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.942388 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" podUID="37b93651-de65-42d6-96b0-560298df3222" containerName="route-controller-manager" containerID="cri-o://88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7" gracePeriod=30 Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.944253 4982 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tlx7h container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.944309 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" podUID="7f7b6b2b-459d-44d3-96ab-b798b81342dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.957822 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hrmx" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.960031 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-htd7x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.960132 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-htd7x" podUID="e0fc969b-5b42-4649-9109-d049431cae47" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.976578 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jhn22" podStartSLOduration=177.976550519 podStartE2EDuration="2m57.976550519s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:31.975414759 +0000 UTC m=+213.594473252" watchObservedRunningTime="2026-02-24 14:52:31.976550519 +0000 UTC m=+213.595609012" Feb 24 14:52:31 crc kubenswrapper[4982]: I0224 14:52:31.998833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.002325 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.502302705 +0000 UTC m=+214.121361198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.101374 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.102566 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.602549462 +0000 UTC m=+214.221607955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.170149 4982 ???:1] "http: TLS handshake error from 192.168.126.11:53014: no serving certificate available for the kubelet" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.185366 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-clvbx" podStartSLOduration=178.185347049 podStartE2EDuration="2m58.185347049s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:32.087790774 +0000 UTC m=+213.706849267" watchObservedRunningTime="2026-02-24 14:52:32.185347049 +0000 UTC m=+213.804405542" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.185458 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz" podStartSLOduration=178.185454672 podStartE2EDuration="2m58.185454672s" podCreationTimestamp="2026-02-24 14:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:32.1842806 +0000 UTC m=+213.803339133" watchObservedRunningTime="2026-02-24 14:52:32.185454672 +0000 UTC m=+213.804513165" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.207459 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.207896 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.707877747 +0000 UTC m=+214.326936300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.309831 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.310110 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.810093077 +0000 UTC m=+214.429151570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.411115 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.411487 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:32.911474655 +0000 UTC m=+214.530533148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.513328 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.514073 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.014058565 +0000 UTC m=+214.633117058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.518924 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.543817 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c2lr6" podStartSLOduration=179.543801419 podStartE2EDuration="2m59.543801419s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:32.218678298 +0000 UTC m=+213.837736791" watchObservedRunningTime="2026-02-24 14:52:32.543801419 +0000 UTC m=+214.162859912" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.562617 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx"] Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.562825 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b93651-de65-42d6-96b0-560298df3222" containerName="route-controller-manager" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.562840 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b93651-de65-42d6-96b0-560298df3222" containerName="route-controller-manager" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.562951 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b93651-de65-42d6-96b0-560298df3222" containerName="route-controller-manager" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.563304 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.588392 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx"] Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.609051 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616334 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-config\") pod \"37b93651-de65-42d6-96b0-560298df3222\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616375 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-proxy-ca-bundles\") pod \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616401 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5gh6\" (UniqueName: \"kubernetes.io/projected/37b93651-de65-42d6-96b0-560298df3222-kube-api-access-v5gh6\") pod \"37b93651-de65-42d6-96b0-560298df3222\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616421 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-client-ca\") pod \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616439 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-serving-cert\") pod \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616519 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-config\") pod \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616553 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-client-ca\") pod \"37b93651-de65-42d6-96b0-560298df3222\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616566 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b93651-de65-42d6-96b0-560298df3222-serving-cert\") pod \"37b93651-de65-42d6-96b0-560298df3222\" (UID: \"37b93651-de65-42d6-96b0-560298df3222\") " Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616597 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgfm8\" (UniqueName: 
\"kubernetes.io/projected/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-kube-api-access-kgfm8\") pod \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\" (UID: \"b9d107c2-6c77-48a2-b2f6-328fb0d83afc\") " Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616748 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-client-ca\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616806 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616823 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-config\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616841 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxc2d\" (UniqueName: \"kubernetes.io/projected/8bc928ae-a991-4b6c-80d9-77b0c336a95f-kube-api-access-pxc2d\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.616866 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc928ae-a991-4b6c-80d9-77b0c336a95f-serving-cert\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.617784 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-client-ca" (OuterVolumeSpecName: "client-ca") pod "37b93651-de65-42d6-96b0-560298df3222" (UID: "37b93651-de65-42d6-96b0-560298df3222"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.618092 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-config" (OuterVolumeSpecName: "config") pod "37b93651-de65-42d6-96b0-560298df3222" (UID: "37b93651-de65-42d6-96b0-560298df3222"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.619671 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9d107c2-6c77-48a2-b2f6-328fb0d83afc" (UID: "b9d107c2-6c77-48a2-b2f6-328fb0d83afc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.619992 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.119980426 +0000 UTC m=+214.739038919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.620351 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-config" (OuterVolumeSpecName: "config") pod "b9d107c2-6c77-48a2-b2f6-328fb0d83afc" (UID: "b9d107c2-6c77-48a2-b2f6-328fb0d83afc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.620837 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b9d107c2-6c77-48a2-b2f6-328fb0d83afc" (UID: "b9d107c2-6c77-48a2-b2f6-328fb0d83afc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.624592 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-kube-api-access-kgfm8" (OuterVolumeSpecName: "kube-api-access-kgfm8") pod "b9d107c2-6c77-48a2-b2f6-328fb0d83afc" (UID: "b9d107c2-6c77-48a2-b2f6-328fb0d83afc"). InnerVolumeSpecName "kube-api-access-kgfm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.628072 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b93651-de65-42d6-96b0-560298df3222-kube-api-access-v5gh6" (OuterVolumeSpecName: "kube-api-access-v5gh6") pod "37b93651-de65-42d6-96b0-560298df3222" (UID: "37b93651-de65-42d6-96b0-560298df3222"). InnerVolumeSpecName "kube-api-access-v5gh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.629044 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9d107c2-6c77-48a2-b2f6-328fb0d83afc" (UID: "b9d107c2-6c77-48a2-b2f6-328fb0d83afc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.635981 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b93651-de65-42d6-96b0-560298df3222-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "37b93651-de65-42d6-96b0-560298df3222" (UID: "37b93651-de65-42d6-96b0-560298df3222"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.718566 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.718671 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.2186525 +0000 UTC m=+214.837710993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719141 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719176 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-config\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719199 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxc2d\" (UniqueName: \"kubernetes.io/projected/8bc928ae-a991-4b6c-80d9-77b0c336a95f-kube-api-access-pxc2d\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719232 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc928ae-a991-4b6c-80d9-77b0c336a95f-serving-cert\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719310 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-client-ca\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719356 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719371 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719384 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b93651-de65-42d6-96b0-560298df3222-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719398 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgfm8\" (UniqueName: \"kubernetes.io/projected/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-kube-api-access-kgfm8\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719408 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b93651-de65-42d6-96b0-560298df3222-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719418 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719428 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5gh6\" (UniqueName: \"kubernetes.io/projected/37b93651-de65-42d6-96b0-560298df3222-kube-api-access-v5gh6\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719440 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.719451 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9d107c2-6c77-48a2-b2f6-328fb0d83afc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.720671 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-client-ca\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.720926 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.220914132 +0000 UTC m=+214.839972625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.721789 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-config\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.734521 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc928ae-a991-4b6c-80d9-77b0c336a95f-serving-cert\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.749227 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxc2d\" (UniqueName: \"kubernetes.io/projected/8bc928ae-a991-4b6c-80d9-77b0c336a95f-kube-api-access-pxc2d\") pod \"route-controller-manager-54f7795d4-kz5hx\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.820351 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.820553 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.320526112 +0000 UTC m=+214.939584605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.820641 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.820994 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.320986535 +0000 UTC m=+214.940045028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.868470 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 14:52:32 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Feb 24 14:52:32 crc kubenswrapper[4982]: [+]process-running ok Feb 24 14:52:32 crc kubenswrapper[4982]: healthz check failed Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.868575 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.922101 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:32 crc kubenswrapper[4982]: E0224 14:52:32.922380 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.422364972 +0000 UTC m=+215.041423465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.922509 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.938784 4982 patch_prober.go:28] interesting pod/console-operator-58897d9998-nq62b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.938835 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nq62b" podUID="217b0f77-c367-4c25-9965-051d89a335e1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.960047 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" event={"ID":"3fcaab52-6a23-416e-a584-8aa43c11ecef","Type":"ContainerStarted","Data":"69a5f1487c0367e4efdcb26d3a2b1fcc79f989a84b411c00ec1fd5722f1c3716"} Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.964210 4982 generic.go:334] "Generic (PLEG): container finished" podID="b9d107c2-6c77-48a2-b2f6-328fb0d83afc" containerID="8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd" exitCode=0 Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.964295 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.964594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" event={"ID":"b9d107c2-6c77-48a2-b2f6-328fb0d83afc","Type":"ContainerDied","Data":"8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd"} Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.964677 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-l5b9h" event={"ID":"b9d107c2-6c77-48a2-b2f6-328fb0d83afc","Type":"ContainerDied","Data":"2aa4ce3b74868284a8fd1f9da5f7df01cc1852314306dfa8b35c3c1e36f73896"} Feb 24 14:52:32 crc kubenswrapper[4982]: I0224 14:52:32.964695 4982 scope.go:117] "RemoveContainer" containerID="8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.000080 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l5b9h"] Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.001961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" event={"ID":"a33527e4-ee5d-4ee2-a2df-b3ab5c2231c7","Type":"ContainerStarted","Data":"d054715105ef4434961d3fdecb9d004b60d0eecfc155efc1bb9e8d335679586f"} Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.006022 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-l5b9h"] Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.024252 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.024633 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.524616414 +0000 UTC m=+215.143674907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.040694 4982 generic.go:334] "Generic (PLEG): container finished" podID="2e396c6a-3142-4ff9-ad15-ed0f7bbe1080" containerID="ec5b8a65e094ab700ff24c9c4b7c02590ca4b6396b7013c10ca299a9251bc21e" exitCode=0 Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.040759 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" event={"ID":"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080","Type":"ContainerDied","Data":"ec5b8a65e094ab700ff24c9c4b7c02590ca4b6396b7013c10ca299a9251bc21e"} Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.042704 4982 generic.go:334] "Generic (PLEG): container finished" podID="37b93651-de65-42d6-96b0-560298df3222" containerID="88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7" exitCode=0 Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.043269 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.050692 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" event={"ID":"37b93651-de65-42d6-96b0-560298df3222","Type":"ContainerDied","Data":"88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7"} Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.050736 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz" event={"ID":"37b93651-de65-42d6-96b0-560298df3222","Type":"ContainerDied","Data":"b26bb774a843761d909fb183ebf3bec2b7f0a48da46625d75bc3a8ac07086d24"} Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.053743 4982 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zv2x7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.053840 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" podUID="34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.064244 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tlx7h" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.066749 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" podStartSLOduration=180.066736591 podStartE2EDuration="3m0.066736591s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:33.058019056 +0000 UTC m=+214.677077549" watchObservedRunningTime="2026-02-24 14:52:33.066736591 +0000 UTC m=+214.685795084" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.067748 4982 scope.go:117] "RemoveContainer" containerID="8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd" Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.068268 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd\": container with ID starting with 8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd not found: ID does not exist" containerID="8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.068302 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd"} err="failed to get container status \"8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd\": rpc error: code = NotFound desc = could not find container \"8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd\": container with ID starting with 8880ef35a92f096b68b5a444df5978428e2d4c1b0e5c292d8c6ed77334f988cd not found: ID does not exist" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.068326 4982 scope.go:117] "RemoveContainer" containerID="88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.114468 4982 scope.go:117] "RemoveContainer" containerID="88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7" Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.117144 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7\": container with ID starting with 88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7 not found: ID does not exist" containerID="88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.117206 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7"} err="failed to get container status \"88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7\": rpc error: code = NotFound desc = could not find container \"88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7\": container with ID starting with 88e6f70b4ce12e582e1e14d697f0c09df2846298f572e0192e39732fed2d63f7 not found: ID does not exist" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.127367 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.128678 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.628652893 +0000 UTC m=+215.247711386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.170567 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9d107c2-6c77-48a2-b2f6-328fb0d83afc" path="/var/lib/kubelet/pods/b9d107c2-6c77-48a2-b2f6-328fb0d83afc/volumes" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.229352 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.235139 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.735118108 +0000 UTC m=+215.354176601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.253637 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nq62b" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.253693 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz"] Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.314850 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rpsqz"] Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.332250 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.333936 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.833916457 +0000 UTC m=+215.452974950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.337902 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.837887454 +0000 UTC m=+215.456945937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.337612 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.442577 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.442835 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:33.942820068 +0000 UTC m=+215.561878561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.491392 4982 ???:1] "http: TLS handshake error from 192.168.126.11:53018: no serving certificate available for the kubelet" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.538596 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.544486 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.544809 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.044794201 +0000 UTC m=+215.663852694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.645181 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.645565 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.145547922 +0000 UTC m=+215.764606415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.645790 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.646403 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.146377835 +0000 UTC m=+215.765436328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.661910 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx"] Feb 24 14:52:33 crc kubenswrapper[4982]: W0224 14:52:33.698524 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bc928ae_a991_4b6c_80d9_77b0c336a95f.slice/crio-6fff53feceb6b4acde575bf50b39162330b5e1d0074f1ce19aab1ea7c6a57f43 WatchSource:0}: Error finding container 6fff53feceb6b4acde575bf50b39162330b5e1d0074f1ce19aab1ea7c6a57f43: Status 404 returned error can't find the container with id 6fff53feceb6b4acde575bf50b39162330b5e1d0074f1ce19aab1ea7c6a57f43 Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.747590 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.747779 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.247753683 +0000 UTC m=+215.866812176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.747887 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.748205 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.248191665 +0000 UTC m=+215.867250158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.848941 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.849491 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.34947647 +0000 UTC m=+215.968534963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.868478 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 14:52:33 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Feb 24 14:52:33 crc kubenswrapper[4982]: [+]process-running ok Feb 24 14:52:33 crc kubenswrapper[4982]: healthz check failed Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.868550 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 14:52:33 crc kubenswrapper[4982]: I0224 14:52:33.950532 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:33 crc kubenswrapper[4982]: E0224 14:52:33.950870 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.450852248 +0000 UTC m=+216.069910731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.025211 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vndg2"] Feb 24 14:52:34 crc kubenswrapper[4982]: E0224 14:52:34.025409 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d107c2-6c77-48a2-b2f6-328fb0d83afc" containerName="controller-manager" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.025421 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d107c2-6c77-48a2-b2f6-328fb0d83afc" containerName="controller-manager" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.025547 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d107c2-6c77-48a2-b2f6-328fb0d83afc" containerName="controller-manager" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.027674 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.030015 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.043717 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vndg2"] Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.051221 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:34 crc kubenswrapper[4982]: E0224 14:52:34.051368 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.551351122 +0000 UTC m=+216.170409615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.051432 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsqb9\" (UniqueName: \"kubernetes.io/projected/ddc9427b-029e-49c4-bce0-b2d40b9259c8-kube-api-access-lsqb9\") pod \"community-operators-vndg2\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.051461 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.051542 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-utilities\") pod \"community-operators-vndg2\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.051675 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-catalog-content\") pod \"community-operators-vndg2\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: E0224 14:52:34.051716 4982 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.551708912 +0000 UTC m=+216.170767405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.057185 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" event={"ID":"8bc928ae-a991-4b6c-80d9-77b0c336a95f","Type":"ContainerStarted","Data":"87ba11e2c159808618a7cb9aeaabb4fca49d8bef0ec47bc483c2ef44931eb9b8"} Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.057235 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" event={"ID":"8bc928ae-a991-4b6c-80d9-77b0c336a95f","Type":"ContainerStarted","Data":"6fff53feceb6b4acde575bf50b39162330b5e1d0074f1ce19aab1ea7c6a57f43"} Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.057397 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.060546 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" event={"ID":"3fcaab52-6a23-416e-a584-8aa43c11ecef","Type":"ContainerStarted","Data":"bc9b3c131712952cb3c9eb4a90932be45f5d8ec7f9da40a97dd715549b412395"} Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.072629 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.085398 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" podStartSLOduration=5.085385542 podStartE2EDuration="5.085385542s" podCreationTimestamp="2026-02-24 14:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:34.083788848 +0000 UTC m=+215.702847331" watchObservedRunningTime="2026-02-24 14:52:34.085385542 +0000 UTC m=+215.704444035" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.120269 4982 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.153488 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:34 crc kubenswrapper[4982]: E0224 14:52:34.153601 4982 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.653558762 +0000 UTC m=+216.272617255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.153691 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-catalog-content\") pod \"community-operators-vndg2\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.154078 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsqb9\" (UniqueName: \"kubernetes.io/projected/ddc9427b-029e-49c4-bce0-b2d40b9259c8-kube-api-access-lsqb9\") pod \"community-operators-vndg2\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.154101 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.154383 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-catalog-content\") pod \"community-operators-vndg2\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.154471 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-utilities\") pod \"community-operators-vndg2\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.157203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-utilities\") pod \"community-operators-vndg2\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: E0224 14:52:34.157305 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.657289113 +0000 UTC m=+216.276347596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.200798 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsqb9\" (UniqueName: \"kubernetes.io/projected/ddc9427b-029e-49c4-bce0-b2d40b9259c8-kube-api-access-lsqb9\") pod \"community-operators-vndg2\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.226166 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-stdjb"] Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.240436 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stdjb"] Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.240557 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.245635 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.255144 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.255403 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58nmx\" (UniqueName: \"kubernetes.io/projected/e41be673-ff4a-465b-a472-22f962fbf6ed-kube-api-access-58nmx\") pod \"certified-operators-stdjb\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.255462 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-utilities\") pod \"certified-operators-stdjb\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.255511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-catalog-content\") pod \"certified-operators-stdjb\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: E0224 14:52:34.255628 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 14:52:34.755594608 +0000 UTC m=+216.374653101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.338085 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.347679 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.356793 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkg6v\" (UniqueName: \"kubernetes.io/projected/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-kube-api-access-kkg6v\") pod \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.356866 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-secret-volume\") pod \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.356924 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-config-volume\") pod \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\" (UID: \"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080\") " Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.357034 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-utilities\") pod \"certified-operators-stdjb\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.357074 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-catalog-content\") pod \"certified-operators-stdjb\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.357094 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.357149 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58nmx\" (UniqueName: \"kubernetes.io/projected/e41be673-ff4a-465b-a472-22f962fbf6ed-kube-api-access-58nmx\") pod 
\"certified-operators-stdjb\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.358418 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-utilities\") pod \"certified-operators-stdjb\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.358687 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-catalog-content\") pod \"certified-operators-stdjb\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.358974 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e396c6a-3142-4ff9-ad15-ed0f7bbe1080" (UID: "2e396c6a-3142-4ff9-ad15-ed0f7bbe1080"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:52:34 crc kubenswrapper[4982]: E0224 14:52:34.359080 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 14:52:34.859065502 +0000 UTC m=+216.478123995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n9wbq" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.361543 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e396c6a-3142-4ff9-ad15-ed0f7bbe1080" (UID: "2e396c6a-3142-4ff9-ad15-ed0f7bbe1080"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.374664 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-kube-api-access-kkg6v" (OuterVolumeSpecName: "kube-api-access-kkg6v") pod "2e396c6a-3142-4ff9-ad15-ed0f7bbe1080" (UID: "2e396c6a-3142-4ff9-ad15-ed0f7bbe1080"). InnerVolumeSpecName "kube-api-access-kkg6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.392335 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58nmx\" (UniqueName: \"kubernetes.io/projected/e41be673-ff4a-465b-a472-22f962fbf6ed-kube-api-access-58nmx\") pod \"certified-operators-stdjb\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.398242 4982 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-24T14:52:34.120292544Z","Handler":null,"Name":""} Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.413007 4982 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.413058 4982 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.418927 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qrnhb"] Feb 24 14:52:34 crc kubenswrapper[4982]: E0224 14:52:34.419111 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e396c6a-3142-4ff9-ad15-ed0f7bbe1080" containerName="collect-profiles" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.419123 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e396c6a-3142-4ff9-ad15-ed0f7bbe1080" containerName="collect-profiles" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.419231 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e396c6a-3142-4ff9-ad15-ed0f7bbe1080" containerName="collect-profiles" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.419845 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.439003 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrnhb"] Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.463419 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.464108 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-catalog-content\") pod \"community-operators-qrnhb\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.464181 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87l2z\" (UniqueName: \"kubernetes.io/projected/ef70a241-8f38-4315-a1c9-a6df74030a41-kube-api-access-87l2z\") pod \"community-operators-qrnhb\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.464206 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-utilities\") pod \"community-operators-qrnhb\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.464288 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.464305 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkg6v\" (UniqueName: \"kubernetes.io/projected/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-kube-api-access-kkg6v\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.464317 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.506819 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.565661 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.566823 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87l2z\" (UniqueName: \"kubernetes.io/projected/ef70a241-8f38-4315-a1c9-a6df74030a41-kube-api-access-87l2z\") pod \"community-operators-qrnhb\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.566866 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-utilities\") pod \"community-operators-qrnhb\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.566894 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.566967 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-catalog-content\") pod \"community-operators-qrnhb\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.567312 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-catalog-content\") pod \"community-operators-qrnhb\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.567755 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-utilities\") pod \"community-operators-qrnhb\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.582164 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.582230 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.601202 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87l2z\" (UniqueName: \"kubernetes.io/projected/ef70a241-8f38-4315-a1c9-a6df74030a41-kube-api-access-87l2z\") pod \"community-operators-qrnhb\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.623387 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n9wbq\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") " pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.624765 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bv7hr"] Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.625651 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.636615 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vndg2"] Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.640610 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bv7hr"] Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.763761 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.770003 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdb8f\" (UniqueName: \"kubernetes.io/projected/5355d669-4f87-48b6-b389-09f97979f9c6-kube-api-access-jdb8f\") pod \"certified-operators-bv7hr\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.770054 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-utilities\") pod \"certified-operators-bv7hr\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.770090 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-catalog-content\") pod \"certified-operators-bv7hr\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.792738 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.811012 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.871312 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdb8f\" (UniqueName: \"kubernetes.io/projected/5355d669-4f87-48b6-b389-09f97979f9c6-kube-api-access-jdb8f\") pod \"certified-operators-bv7hr\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.871365 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-utilities\") pod \"certified-operators-bv7hr\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.871413 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-catalog-content\") pod \"certified-operators-bv7hr\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.872048 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-catalog-content\") pod \"certified-operators-bv7hr\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.872856 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-utilities\") pod \"certified-operators-bv7hr\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.873030 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 14:52:34 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Feb 24 14:52:34 crc kubenswrapper[4982]: [+]process-running ok Feb 24 14:52:34 crc kubenswrapper[4982]: healthz check failed Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.873092 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.915556 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdb8f\" (UniqueName: \"kubernetes.io/projected/5355d669-4f87-48b6-b389-09f97979f9c6-kube-api-access-jdb8f\") pod \"certified-operators-bv7hr\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:34 crc kubenswrapper[4982]: I0224 14:52:34.965799 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.067758 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" event={"ID":"2e396c6a-3142-4ff9-ad15-ed0f7bbe1080","Type":"ContainerDied","Data":"5a83f5a769c26d58b32022261f923eade6d061046fb2850ed786fbcec453780a"} Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.067788 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a83f5a769c26d58b32022261f923eade6d061046fb2850ed786fbcec453780a" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.067853 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.091023 4982 generic.go:334] "Generic (PLEG): container finished" podID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerID="2d0db04461c42819fc1d3e6b8a792dcd6c3101ed9c802d2c91072e563b401934" exitCode=0 Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.091083 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vndg2" event={"ID":"ddc9427b-029e-49c4-bce0-b2d40b9259c8","Type":"ContainerDied","Data":"2d0db04461c42819fc1d3e6b8a792dcd6c3101ed9c802d2c91072e563b401934"} Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.091311 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vndg2" event={"ID":"ddc9427b-029e-49c4-bce0-b2d40b9259c8","Type":"ContainerStarted","Data":"b53af7dcdbb0904bd600bfae95ba1e99a9217c0a97efe1805ae746f835ff40dd"} Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.096847 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" event={"ID":"3fcaab52-6a23-416e-a584-8aa43c11ecef","Type":"ContainerStarted","Data":"0d3014c7c3a62f7f7d07bec102898205ca212025a6806f027b9f0ee5b2484fbe"} Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.096877 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" event={"ID":"3fcaab52-6a23-416e-a584-8aa43c11ecef","Type":"ContainerStarted","Data":"cb7bbd97dbc108a9a84dc1d77c21132043b78eb8efcf93682b5337e085ece40e"} Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.153291 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ffjpg" podStartSLOduration=12.153271221 podStartE2EDuration="12.153271221s" podCreationTimestamp="2026-02-24 14:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:35.147647619 +0000 UTC m=+216.766706112" watchObservedRunningTime="2026-02-24 14:52:35.153271221 +0000 UTC m=+216.772329714" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.163303 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b93651-de65-42d6-96b0-560298df3222" path="/var/lib/kubelet/pods/37b93651-de65-42d6-96b0-560298df3222/volumes" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.164122 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.164572 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9wbq"] Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.164745 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stdjb"] Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.169321 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8464f9666b-gf5ql"] Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.171822 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.175656 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.175948 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.176053 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.176694 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8464f9666b-gf5ql"] Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.177117 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.177178 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.177248 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.180458 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnrf6\" (UniqueName: \"kubernetes.io/projected/21f97c87-102b-4f07-a112-f60307bbad0a-kube-api-access-tnrf6\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.180539 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f97c87-102b-4f07-a112-f60307bbad0a-serving-cert\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.180610 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-client-ca\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.200697 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-proxy-ca-bundles\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.200834 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-config\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " 
pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.204487 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.260435 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bv7hr"] Feb 24 14:52:35 crc kubenswrapper[4982]: W0224 14:52:35.284034 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5355d669_4f87_48b6_b389_09f97979f9c6.slice/crio-13aa4bc3b57191613c30ddf03fd1cb467d4a9d9ec0bbd9076d9042fe24b2d7e9 WatchSource:0}: Error finding container 13aa4bc3b57191613c30ddf03fd1cb467d4a9d9ec0bbd9076d9042fe24b2d7e9: Status 404 returned error can't find the container with id 13aa4bc3b57191613c30ddf03fd1cb467d4a9d9ec0bbd9076d9042fe24b2d7e9 Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.302004 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-config\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.302067 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnrf6\" (UniqueName: \"kubernetes.io/projected/21f97c87-102b-4f07-a112-f60307bbad0a-kube-api-access-tnrf6\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.302090 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f97c87-102b-4f07-a112-f60307bbad0a-serving-cert\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.302987 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-client-ca\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.303273 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-proxy-ca-bundles\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.303538 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-config\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.304218 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-client-ca\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.305311 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-proxy-ca-bundles\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.309576 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f97c87-102b-4f07-a112-f60307bbad0a-serving-cert\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.318458 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnrf6\" (UniqueName: \"kubernetes.io/projected/21f97c87-102b-4f07-a112-f60307bbad0a-kube-api-access-tnrf6\") pod \"controller-manager-8464f9666b-gf5ql\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.412325 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrnhb"] Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.521767 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.869477 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 14:52:35 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Feb 24 14:52:35 crc kubenswrapper[4982]: [+]process-running ok Feb 24 14:52:35 crc kubenswrapper[4982]: healthz check failed Feb 24 14:52:35 crc kubenswrapper[4982]: I0224 14:52:35.870041 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.006096 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8464f9666b-gf5ql"] Feb 24 14:52:36 crc kubenswrapper[4982]: W0224 14:52:36.031030 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21f97c87_102b_4f07_a112_f60307bbad0a.slice/crio-3efa605ee971fe624968d8a8e2fc59cf363b395b1ef400dc294044aa1edfbf54 WatchSource:0}: Error finding container 3efa605ee971fe624968d8a8e2fc59cf363b395b1ef400dc294044aa1edfbf54: Status 404 returned error can't find the container with id 3efa605ee971fe624968d8a8e2fc59cf363b395b1ef400dc294044aa1edfbf54 Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.078818 4982 ???:1] "http: TLS handshake error from 192.168.126.11:53020: no serving certificate available for the kubelet" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.107639 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" event={"ID":"21f97c87-102b-4f07-a112-f60307bbad0a","Type":"ContainerStarted","Data":"3efa605ee971fe624968d8a8e2fc59cf363b395b1ef400dc294044aa1edfbf54"} Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.113186 4982 generic.go:334] "Generic (PLEG): container finished" podID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerID="99d907b308c1e59eb7463e404051762158bc0cca5a15b75c6c8e922179f7b54f" exitCode=0 Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.113259 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrnhb" event={"ID":"ef70a241-8f38-4315-a1c9-a6df74030a41","Type":"ContainerDied","Data":"99d907b308c1e59eb7463e404051762158bc0cca5a15b75c6c8e922179f7b54f"} Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.113288 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrnhb" event={"ID":"ef70a241-8f38-4315-a1c9-a6df74030a41","Type":"ContainerStarted","Data":"1f56214747703070887fde2eaf4f896ce332b5094398c4eef46acca4662ef93f"} Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.119258 4982 generic.go:334] "Generic (PLEG): container finished" podID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerID="b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5" exitCode=0 Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.119303 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stdjb" 
event={"ID":"e41be673-ff4a-465b-a472-22f962fbf6ed","Type":"ContainerDied","Data":"b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5"} Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.119323 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stdjb" event={"ID":"e41be673-ff4a-465b-a472-22f962fbf6ed","Type":"ContainerStarted","Data":"cf5c23812bf17a113a04698647b6679bd2b67d08f371fb135c2107e05539d10d"} Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.122590 4982 generic.go:334] "Generic (PLEG): container finished" podID="5355d669-4f87-48b6-b389-09f97979f9c6" containerID="c2d1d35aa1ad6a3ec88ef0e26b0e927ab7bb46c2ae38b05771dafcf39c1a1501" exitCode=0 Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.122638 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv7hr" event={"ID":"5355d669-4f87-48b6-b389-09f97979f9c6","Type":"ContainerDied","Data":"c2d1d35aa1ad6a3ec88ef0e26b0e927ab7bb46c2ae38b05771dafcf39c1a1501"} Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.122662 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv7hr" event={"ID":"5355d669-4f87-48b6-b389-09f97979f9c6","Type":"ContainerStarted","Data":"13aa4bc3b57191613c30ddf03fd1cb467d4a9d9ec0bbd9076d9042fe24b2d7e9"} Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.125567 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" event={"ID":"f31bded6-f3d5-42b4-b479-3c01ce30e73a","Type":"ContainerStarted","Data":"1cffe8b5992aa18c0107bfd265e93a129477a436d344260e03ab3d38c895b4de"} Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.125846 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" event={"ID":"f31bded6-f3d5-42b4-b479-3c01ce30e73a","Type":"ContainerStarted","Data":"815feb4ea140d82d980ace17f3473d1e7352fdf54abd47bb7438f285d33805e9"} Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.125874 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.147310 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" podStartSLOduration=183.147293525 podStartE2EDuration="3m3.147293525s" podCreationTimestamp="2026-02-24 14:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:36.146926706 +0000 UTC m=+217.765985199" watchObservedRunningTime="2026-02-24 14:52:36.147293525 +0000 UTC m=+217.766352018" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.224350 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rzclt"] Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.226092 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.230089 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.246381 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzclt"] Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.326056 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-catalog-content\") pod \"redhat-marketplace-rzclt\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.326117 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-utilities\") pod \"redhat-marketplace-rzclt\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.326160 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwrf4\" (UniqueName: \"kubernetes.io/projected/6f482128-8f1f-43bc-b715-03049473f155-kube-api-access-rwrf4\") pod \"redhat-marketplace-rzclt\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.402067 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-htd7x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.402083 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-htd7x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.402125 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-htd7x" podUID="e0fc969b-5b42-4649-9109-d049431cae47" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.402144 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-htd7x" podUID="e0fc969b-5b42-4649-9109-d049431cae47" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.427303 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwrf4\" (UniqueName: \"kubernetes.io/projected/6f482128-8f1f-43bc-b715-03049473f155-kube-api-access-rwrf4\") pod \"redhat-marketplace-rzclt\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.427399 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-catalog-content\") pod \"redhat-marketplace-rzclt\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.427450 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-utilities\") pod \"redhat-marketplace-rzclt\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.428082 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-utilities\") pod \"redhat-marketplace-rzclt\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.428230 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-catalog-content\") pod \"redhat-marketplace-rzclt\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.467888 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwrf4\" (UniqueName: \"kubernetes.io/projected/6f482128-8f1f-43bc-b715-03049473f155-kube-api-access-rwrf4\") pod \"redhat-marketplace-rzclt\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.554972 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.624404 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zpp6q"] Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.625618 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.630235 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.630280 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.634766 4982 patch_prober.go:28] interesting pod/console-f9d7485db-nsv6c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.634831 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nsv6c" podUID="dfdbf1b1-2b07-4bff-ab32-759a436b4a78" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.642552 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpp6q"] Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.682665 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.682699 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.690746 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.732033 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-utilities\") pod \"redhat-marketplace-zpp6q\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") " pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.732172 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5g7m\" (UniqueName: \"kubernetes.io/projected/7a881e66-97bc-43af-8be8-1cf11ec61b72-kube-api-access-q5g7m\") pod \"redhat-marketplace-zpp6q\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") " pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.732335 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-catalog-content\") pod \"redhat-marketplace-zpp6q\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") " pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.833880 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-catalog-content\") pod \"redhat-marketplace-zpp6q\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") " pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 
14:52:36.834349 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-utilities\") pod \"redhat-marketplace-zpp6q\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") " pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.834389 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5g7m\" (UniqueName: \"kubernetes.io/projected/7a881e66-97bc-43af-8be8-1cf11ec61b72-kube-api-access-q5g7m\") pod \"redhat-marketplace-zpp6q\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") " pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.834682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-catalog-content\") pod \"redhat-marketplace-zpp6q\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") " pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.837899 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-utilities\") pod \"redhat-marketplace-zpp6q\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") " pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.851211 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5g7m\" (UniqueName: \"kubernetes.io/projected/7a881e66-97bc-43af-8be8-1cf11ec61b72-kube-api-access-q5g7m\") pod \"redhat-marketplace-zpp6q\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") " pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.864647 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.868171 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 14:52:36 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Feb 24 14:52:36 crc kubenswrapper[4982]: [+]process-running ok Feb 24 14:52:36 crc kubenswrapper[4982]: healthz check failed Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.868226 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 14:52:36 crc kubenswrapper[4982]: I0224 14:52:36.968433 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.076055 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzclt"] Feb 24 14:52:37 crc kubenswrapper[4982]: W0224 14:52:37.108491 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f482128_8f1f_43bc_b715_03049473f155.slice/crio-e18db69d4813beb64f416fc5efbbe1a70b4dfd717315fba095573a5354cc8ec8 WatchSource:0}: Error finding container e18db69d4813beb64f416fc5efbbe1a70b4dfd717315fba095573a5354cc8ec8: Status 404 returned error can't find the container with id e18db69d4813beb64f416fc5efbbe1a70b4dfd717315fba095573a5354cc8ec8 Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.141948 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" event={"ID":"21f97c87-102b-4f07-a112-f60307bbad0a","Type":"ContainerStarted","Data":"9a3dd2864276b324d34b2c993ea62e10dfea3b5cae75698b1a0d1aa9e28efe61"} Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.142895 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.156022 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bpq5w" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.156058 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzclt" event={"ID":"6f482128-8f1f-43bc-b715-03049473f155","Type":"ContainerStarted","Data":"e18db69d4813beb64f416fc5efbbe1a70b4dfd717315fba095573a5354cc8ec8"} Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.156761 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.163659 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" podStartSLOduration=8.163644933 podStartE2EDuration="8.163644933s" podCreationTimestamp="2026-02-24 14:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:52:37.163431028 +0000 UTC m=+218.782489531" watchObservedRunningTime="2026-02-24 14:52:37.163644933 +0000 UTC m=+218.782703426" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.195717 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpp6q"] Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.240342 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xvwws"] Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.241441 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.242156 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.242854 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.261435 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.261736 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.261844 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.264801 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvwws"] Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.272180 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.419328 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gkfkc"] Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.420358 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.436873 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gkfkc"] Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.447310 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-catalog-content\") pod \"redhat-operators-xvwws\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.447410 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ba93aa6-3011-4955-9efe-d53362976685-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4ba93aa6-3011-4955-9efe-d53362976685\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.447442 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ba93aa6-3011-4955-9efe-d53362976685-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4ba93aa6-3011-4955-9efe-d53362976685\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.447487 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-utilities\") pod \"redhat-operators-xvwws\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.447533 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4fvm\" (UniqueName: \"kubernetes.io/projected/ee9cd5d1-3ce1-4722-b3aa-892b33502443-kube-api-access-h4fvm\") pod \"redhat-operators-xvwws\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " 
pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.548314 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ba93aa6-3011-4955-9efe-d53362976685-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4ba93aa6-3011-4955-9efe-d53362976685\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.548354 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ba93aa6-3011-4955-9efe-d53362976685-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4ba93aa6-3011-4955-9efe-d53362976685\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.548388 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-catalog-content\") pod \"redhat-operators-gkfkc\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") " pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.548439 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-utilities\") pod \"redhat-operators-xvwws\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.548467 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4fvm\" (UniqueName: \"kubernetes.io/projected/ee9cd5d1-3ce1-4722-b3aa-892b33502443-kube-api-access-h4fvm\") pod \"redhat-operators-xvwws\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.548573 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-utilities\") pod \"redhat-operators-gkfkc\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") " pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.548601 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-catalog-content\") pod \"redhat-operators-xvwws\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.548636 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4xxz\" (UniqueName: \"kubernetes.io/projected/0c9513b8-113e-4a7d-8c23-b5596888d66a-kube-api-access-b4xxz\") pod \"redhat-operators-gkfkc\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") " pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.548913 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ba93aa6-3011-4955-9efe-d53362976685-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"4ba93aa6-3011-4955-9efe-d53362976685\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.549130 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-utilities\") pod \"redhat-operators-xvwws\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.549676 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-catalog-content\") pod \"redhat-operators-xvwws\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.567055 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ba93aa6-3011-4955-9efe-d53362976685-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4ba93aa6-3011-4955-9efe-d53362976685\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.571939 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4fvm\" (UniqueName: \"kubernetes.io/projected/ee9cd5d1-3ce1-4722-b3aa-892b33502443-kube-api-access-h4fvm\") pod \"redhat-operators-xvwws\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.581912 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.618781 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.619437 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.617672 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.626775 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.627338 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.632344 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.650246 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4xxz\" (UniqueName: \"kubernetes.io/projected/0c9513b8-113e-4a7d-8c23-b5596888d66a-kube-api-access-b4xxz\") pod \"redhat-operators-gkfkc\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") " pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.650307 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-catalog-content\") pod \"redhat-operators-gkfkc\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") " pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.650384 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-utilities\") pod \"redhat-operators-gkfkc\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") " pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.650851 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-utilities\") pod \"redhat-operators-gkfkc\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") " pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.650888 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-catalog-content\") pod \"redhat-operators-gkfkc\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") " pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.686587 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4xxz\" (UniqueName: \"kubernetes.io/projected/0c9513b8-113e-4a7d-8c23-b5596888d66a-kube-api-access-b4xxz\") pod \"redhat-operators-gkfkc\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") " pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.753069 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.753132 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.773279 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gkfkc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.854995 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.855052 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.855153 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.867759 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 14:52:37 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Feb 24 14:52:37 crc kubenswrapper[4982]: [+]process-running ok Feb 24 14:52:37 crc kubenswrapper[4982]: healthz check failed Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.867842 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.870548 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:52:37 crc kubenswrapper[4982]: I0224 14:52:37.950899 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:52:38 crc kubenswrapper[4982]: I0224 14:52:38.170292 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f482128-8f1f-43bc-b715-03049473f155" containerID="ff3df357ccf88017218d789c88707d1a0b30a887da0bae389fa2f09228548882" exitCode=0 Feb 24 14:52:38 crc kubenswrapper[4982]: I0224 14:52:38.170402 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzclt" event={"ID":"6f482128-8f1f-43bc-b715-03049473f155","Type":"ContainerDied","Data":"ff3df357ccf88017218d789c88707d1a0b30a887da0bae389fa2f09228548882"} Feb 24 14:52:38 crc kubenswrapper[4982]: I0224 14:52:38.178851 4982 generic.go:334] "Generic (PLEG): container finished" podID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerID="5542ff92ac297599b41027f4e7eb0be512928b0d07f5635a15fb13d554aa8878" exitCode=0 Feb 24 14:52:38 crc kubenswrapper[4982]: I0224 14:52:38.180282 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpp6q" event={"ID":"7a881e66-97bc-43af-8be8-1cf11ec61b72","Type":"ContainerDied","Data":"5542ff92ac297599b41027f4e7eb0be512928b0d07f5635a15fb13d554aa8878"} Feb 24 14:52:38 crc kubenswrapper[4982]: I0224 14:52:38.180311 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpp6q" event={"ID":"7a881e66-97bc-43af-8be8-1cf11ec61b72","Type":"ContainerStarted","Data":"2220dd254cf6a6fab3a2424c77238df4834dac67a3eb0269e6a09c152ea22abf"} Feb 24 14:52:38 crc kubenswrapper[4982]: I0224 14:52:38.738261 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 14:52:38 crc kubenswrapper[4982]: I0224 14:52:38.738648 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 14:52:38 crc kubenswrapper[4982]: I0224 14:52:38.871259 4982 patch_prober.go:28] interesting pod/router-default-5444994796-9x558 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 14:52:38 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Feb 24 14:52:38 crc kubenswrapper[4982]: [+]process-running ok Feb 24 14:52:38 crc kubenswrapper[4982]: healthz check failed Feb 24 14:52:38 crc kubenswrapper[4982]: I0224 14:52:38.871317 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9x558" podUID="e2b79e66-39f3-40e8-ad1d-cdd963a10983" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 14:52:39 crc kubenswrapper[4982]: I0224 14:52:39.183057 4982 ???:1] "http: TLS handshake error from 192.168.126.11:53028: no serving certificate available for the kubelet" Feb 24 14:52:39 crc kubenswrapper[4982]: I0224 14:52:39.963885 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9x558" Feb 24 14:52:39 crc kubenswrapper[4982]: I0224 
Feb 24 14:52:41 crc kubenswrapper[4982]: I0224 14:52:41.220452 4982 ???:1] "http: TLS handshake error from 192.168.126.11:53034: no serving certificate available for the kubelet"
Feb 24 14:52:42 crc kubenswrapper[4982]: I0224 14:52:42.031191 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ndstv"
Feb 24 14:52:46 crc kubenswrapper[4982]: I0224 14:52:46.402356 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-htd7x container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 24 14:52:46 crc kubenswrapper[4982]: I0224 14:52:46.402919 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-htd7x" podUID="e0fc969b-5b42-4649-9109-d049431cae47" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 24 14:52:46 crc kubenswrapper[4982]: I0224 14:52:46.402356 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-htd7x container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 24 14:52:46 crc kubenswrapper[4982]: I0224 14:52:46.402972 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-htd7x" podUID="e0fc969b-5b42-4649-9109-d049431cae47" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 24 14:52:46 crc kubenswrapper[4982]: I0224 14:52:46.642982 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-nsv6c"
Feb 24 14:52:46 crc kubenswrapper[4982]: I0224 14:52:46.651163 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-nsv6c"
Feb 24 14:52:48 crc kubenswrapper[4982]: I0224 14:52:48.796138 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8464f9666b-gf5ql"]
Feb 24 14:52:48 crc kubenswrapper[4982]: I0224 14:52:48.796840 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" podUID="21f97c87-102b-4f07-a112-f60307bbad0a" containerName="controller-manager" containerID="cri-o://9a3dd2864276b324d34b2c993ea62e10dfea3b5cae75698b1a0d1aa9e28efe61" gracePeriod=30
Feb 24 14:52:48 crc kubenswrapper[4982]: I0224 14:52:48.820306 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx"]
Feb 24 14:52:48 crc kubenswrapper[4982]: I0224 14:52:48.820805 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" podUID="8bc928ae-a991-4b6c-80d9-77b0c336a95f" containerName="route-controller-manager" containerID="cri-o://87ba11e2c159808618a7cb9aeaabb4fca49d8bef0ec47bc483c2ef44931eb9b8" gracePeriod=30
Feb 24 14:52:48 crc kubenswrapper[4982]: I0224 14:52:48.987171 4982 generic.go:334] "Generic (PLEG): container finished" podID="21f97c87-102b-4f07-a112-f60307bbad0a" containerID="9a3dd2864276b324d34b2c993ea62e10dfea3b5cae75698b1a0d1aa9e28efe61" exitCode=0
Feb 24 14:52:48 crc kubenswrapper[4982]: I0224 14:52:48.987213 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" event={"ID":"21f97c87-102b-4f07-a112-f60307bbad0a","Type":"ContainerDied","Data":"9a3dd2864276b324d34b2c993ea62e10dfea3b5cae75698b1a0d1aa9e28efe61"}
Feb 24 14:52:49 crc kubenswrapper[4982]: I0224 14:52:49.994273 4982 generic.go:334] "Generic (PLEG): container finished" podID="8bc928ae-a991-4b6c-80d9-77b0c336a95f" containerID="87ba11e2c159808618a7cb9aeaabb4fca49d8bef0ec47bc483c2ef44931eb9b8" exitCode=0
Feb 24 14:52:49 crc kubenswrapper[4982]: I0224 14:52:49.994322 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" event={"ID":"8bc928ae-a991-4b6c-80d9-77b0c336a95f","Type":"ContainerDied","Data":"87ba11e2c159808618a7cb9aeaabb4fca49d8bef0ec47bc483c2ef44931eb9b8"}
Feb 24 14:52:52 crc kubenswrapper[4982]: I0224 14:52:52.925141 4982 patch_prober.go:28] interesting pod/route-controller-manager-54f7795d4-kz5hx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body=
Feb 24 14:52:52 crc kubenswrapper[4982]: I0224 14:52:52.925611 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" podUID="8bc928ae-a991-4b6c-80d9-77b0c336a95f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused"
Feb 24 14:52:54 crc kubenswrapper[4982]: I0224 14:52:54.820271 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq"
Feb 24 14:52:55 crc kubenswrapper[4982]: I0224 14:52:55.523103 4982 patch_prober.go:28] interesting pod/controller-manager-8464f9666b-gf5ql container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body=
Feb 24 14:52:55 crc kubenswrapper[4982]: I0224 14:52:55.523642 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" podUID="21f97c87-102b-4f07-a112-f60307bbad0a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused"
Feb 24 14:52:56 crc kubenswrapper[4982]: I0224 14:52:56.836021 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-htd7x"
Feb 24 14:53:01 crc kubenswrapper[4982]: I0224 14:53:01.731075 4982 ???:1] "http: TLS handshake error from 192.168.126.11:47778: no serving certificate available for the kubelet"
Feb 24 14:53:03 crc kubenswrapper[4982]: I0224 14:53:03.923417 4982 patch_prober.go:28] interesting pod/route-controller-manager-54f7795d4-kz5hx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 14:53:03 crc kubenswrapper[4982]: I0224 14:53:03.923614 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" podUID="8bc928ae-a991-4b6c-80d9-77b0c336a95f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 14:53:06 crc kubenswrapper[4982]: I0224 14:53:06.523337 4982 patch_prober.go:28] interesting pod/controller-manager-8464f9666b-gf5ql container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 14:53:06 crc kubenswrapper[4982]: I0224 14:53:06.523930 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" podUID="21f97c87-102b-4f07-a112-f60307bbad0a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.225677 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czvmz"
Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.415690 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.417343 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.429910 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.605400 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/262ce385-a8fb-4909-a456-7eeea3f77634-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"262ce385-a8fb-4909-a456-7eeea3f77634\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.605616 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/262ce385-a8fb-4909-a456-7eeea3f77634-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"262ce385-a8fb-4909-a456-7eeea3f77634\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.707203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/262ce385-a8fb-4909-a456-7eeea3f77634-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"262ce385-a8fb-4909-a456-7eeea3f77634\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.707095 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/262ce385-a8fb-4909-a456-7eeea3f77634-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"262ce385-a8fb-4909-a456-7eeea3f77634\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.707413 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/262ce385-a8fb-4909-a456-7eeea3f77634-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"262ce385-a8fb-4909-a456-7eeea3f77634\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:07 crc kubenswrapper[4982]: I0224 14:53:07.739745 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/262ce385-a8fb-4909-a456-7eeea3f77634-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"262ce385-a8fb-4909-a456-7eeea3f77634\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:08 crc kubenswrapper[4982]: I0224 14:53:08.039822 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:08 crc kubenswrapper[4982]: I0224 14:53:08.738422 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 14:53:08 crc kubenswrapper[4982]: I0224 14:53:08.738905 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 14:53:09 crc kubenswrapper[4982]: I0224 14:53:09.991206 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:53:09 crc kubenswrapper[4982]: I0224 14:53:09.996074 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.056956 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"] Feb 24 14:53:10 crc kubenswrapper[4982]: E0224 14:53:10.057318 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f97c87-102b-4f07-a112-f60307bbad0a" containerName="controller-manager" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.057344 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f97c87-102b-4f07-a112-f60307bbad0a" containerName="controller-manager" Feb 24 14:53:10 crc kubenswrapper[4982]: E0224 14:53:10.057380 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc928ae-a991-4b6c-80d9-77b0c336a95f" containerName="route-controller-manager" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.057389 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc928ae-a991-4b6c-80d9-77b0c336a95f" containerName="route-controller-manager" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.057566 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc928ae-a991-4b6c-80d9-77b0c336a95f" containerName="route-controller-manager" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.057589 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f97c87-102b-4f07-a112-f60307bbad0a" containerName="controller-manager" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.058229 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.119317 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"] Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.148211 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-client-ca\") pod \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.148254 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxc2d\" (UniqueName: \"kubernetes.io/projected/8bc928ae-a991-4b6c-80d9-77b0c336a95f-kube-api-access-pxc2d\") pod \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.148279 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnrf6\" (UniqueName: \"kubernetes.io/projected/21f97c87-102b-4f07-a112-f60307bbad0a-kube-api-access-tnrf6\") pod \"21f97c87-102b-4f07-a112-f60307bbad0a\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.148343 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-proxy-ca-bundles\") pod \"21f97c87-102b-4f07-a112-f60307bbad0a\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.148370 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc928ae-a991-4b6c-80d9-77b0c336a95f-serving-cert\") pod \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.148388 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-config\") pod \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\" (UID: \"8bc928ae-a991-4b6c-80d9-77b0c336a95f\") " Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.148417 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f97c87-102b-4f07-a112-f60307bbad0a-serving-cert\") pod \"21f97c87-102b-4f07-a112-f60307bbad0a\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.148437 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-client-ca\") pod \"21f97c87-102b-4f07-a112-f60307bbad0a\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.148468 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-config\") pod \"21f97c87-102b-4f07-a112-f60307bbad0a\" (UID: \"21f97c87-102b-4f07-a112-f60307bbad0a\") " Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149371 4982 
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149390 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-config" (OuterVolumeSpecName: "config") pod "8bc928ae-a991-4b6c-80d9-77b0c336a95f" (UID: "8bc928ae-a991-4b6c-80d9-77b0c336a95f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149418 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-config" (OuterVolumeSpecName: "config") pod "21f97c87-102b-4f07-a112-f60307bbad0a" (UID: "21f97c87-102b-4f07-a112-f60307bbad0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149410 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-client-ca" (OuterVolumeSpecName: "client-ca") pod "21f97c87-102b-4f07-a112-f60307bbad0a" (UID: "21f97c87-102b-4f07-a112-f60307bbad0a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149448 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-client-ca\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149562 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/133dbd75-9538-4a08-8fb6-faa198403f55-serving-cert\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149595 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-config\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149670 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9v4k\" (UniqueName: \"kubernetes.io/projected/133dbd75-9538-4a08-8fb6-faa198403f55-kube-api-access-d9v4k\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149736 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149747 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-config\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149807 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.149843 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f97c87-102b-4f07-a112-f60307bbad0a-config\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.150057 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bc928ae-a991-4b6c-80d9-77b0c336a95f-client-ca" (OuterVolumeSpecName: "client-ca") pod "8bc928ae-a991-4b6c-80d9-77b0c336a95f" (UID: "8bc928ae-a991-4b6c-80d9-77b0c336a95f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.153131 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f97c87-102b-4f07-a112-f60307bbad0a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21f97c87-102b-4f07-a112-f60307bbad0a" (UID: "21f97c87-102b-4f07-a112-f60307bbad0a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.153181 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc928ae-a991-4b6c-80d9-77b0c336a95f-kube-api-access-pxc2d" (OuterVolumeSpecName: "kube-api-access-pxc2d") pod "8bc928ae-a991-4b6c-80d9-77b0c336a95f" (UID: "8bc928ae-a991-4b6c-80d9-77b0c336a95f"). InnerVolumeSpecName "kube-api-access-pxc2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.154518 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f97c87-102b-4f07-a112-f60307bbad0a-kube-api-access-tnrf6" (OuterVolumeSpecName: "kube-api-access-tnrf6") pod "21f97c87-102b-4f07-a112-f60307bbad0a" (UID: "21f97c87-102b-4f07-a112-f60307bbad0a"). InnerVolumeSpecName "kube-api-access-tnrf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.160129 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" event={"ID":"21f97c87-102b-4f07-a112-f60307bbad0a","Type":"ContainerDied","Data":"3efa605ee971fe624968d8a8e2fc59cf363b395b1ef400dc294044aa1edfbf54"}
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.160177 4982 scope.go:117] "RemoveContainer" containerID="9a3dd2864276b324d34b2c993ea62e10dfea3b5cae75698b1a0d1aa9e28efe61"
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.160203 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql"
Need to start a new one" pod="openshift-controller-manager/controller-manager-8464f9666b-gf5ql" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.161699 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" event={"ID":"8bc928ae-a991-4b6c-80d9-77b0c336a95f","Type":"ContainerDied","Data":"6fff53feceb6b4acde575bf50b39162330b5e1d0074f1ce19aab1ea7c6a57f43"} Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.161760 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.171964 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bc928ae-a991-4b6c-80d9-77b0c336a95f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8bc928ae-a991-4b6c-80d9-77b0c336a95f" (UID: "8bc928ae-a991-4b6c-80d9-77b0c336a95f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.242753 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8464f9666b-gf5ql"] Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.245245 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8464f9666b-gf5ql"] Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.251126 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9v4k\" (UniqueName: \"kubernetes.io/projected/133dbd75-9538-4a08-8fb6-faa198403f55-kube-api-access-d9v4k\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.251267 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-client-ca\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.251319 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/133dbd75-9538-4a08-8fb6-faa198403f55-serving-cert\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.251341 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-config\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.251431 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f97c87-102b-4f07-a112-f60307bbad0a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.251445 4982 
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.251457 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxc2d\" (UniqueName: \"kubernetes.io/projected/8bc928ae-a991-4b6c-80d9-77b0c336a95f-kube-api-access-pxc2d\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.251472 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnrf6\" (UniqueName: \"kubernetes.io/projected/21f97c87-102b-4f07-a112-f60307bbad0a-kube-api-access-tnrf6\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.251484 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc928ae-a991-4b6c-80d9-77b0c336a95f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.252584 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-client-ca\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.252966 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-config\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.271086 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/133dbd75-9538-4a08-8fb6-faa198403f55-serving-cert\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.271182 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9v4k\" (UniqueName: \"kubernetes.io/projected/133dbd75-9538-4a08-8fb6-faa198403f55-kube-api-access-d9v4k\") pod \"route-controller-manager-599554976-8nzdt\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") " pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.375046 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.490193 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx"] Feb 24 14:53:10 crc kubenswrapper[4982]: I0224 14:53:10.510912 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f7795d4-kz5hx"] Feb 24 14:53:10 crc kubenswrapper[4982]: E0224 14:53:10.569778 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 24 14:53:10 crc kubenswrapper[4982]: E0224 14:53:10.569922 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 14:53:10 crc kubenswrapper[4982]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 24 14:53:10 crc kubenswrapper[4982]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmdm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29532412-hd7v7_openshift-infra(41b9348a-b44f-4ecf-9043-0948b992d64e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 24 14:53:10 crc kubenswrapper[4982]: > logger="UnhandledError" Feb 24 14:53:10 crc kubenswrapper[4982]: E0224 14:53:10.571142 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29532412-hd7v7" podUID="41b9348a-b44f-4ecf-9043-0948b992d64e" Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.157273 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f97c87-102b-4f07-a112-f60307bbad0a" path="/var/lib/kubelet/pods/21f97c87-102b-4f07-a112-f60307bbad0a/volumes" Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.158910 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc928ae-a991-4b6c-80d9-77b0c336a95f" path="/var/lib/kubelet/pods/8bc928ae-a991-4b6c-80d9-77b0c336a95f/volumes" Feb 24 14:53:11 crc kubenswrapper[4982]: E0224 14:53:11.175837 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29532412-hd7v7" podUID="41b9348a-b44f-4ecf-9043-0948b992d64e" Feb 24 14:53:11 crc 
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.628629 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.647715 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.775867 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-var-lock\") pod \"installer-9-crc\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.775948 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-kubelet-dir\") pod \"installer-9-crc\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.775991 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/875ece47-3110-4574-9725-0d76af8a8498-kube-api-access\") pod \"installer-9-crc\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.877630 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-var-lock\") pod \"installer-9-crc\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.877816 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-var-lock\") pod \"installer-9-crc\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.878489 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-kubelet-dir\") pod \"installer-9-crc\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.878556 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-kubelet-dir\") pod \"installer-9-crc\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.878604 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/875ece47-3110-4574-9725-0d76af8a8498-kube-api-access\") pod \"installer-9-crc\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.907264 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/875ece47-3110-4574-9725-0d76af8a8498-kube-api-access\") pod \"installer-9-crc\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:11 crc kubenswrapper[4982]: I0224 14:53:11.953427 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:53:12 crc kubenswrapper[4982]: I0224 14:53:12.995907 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"]
Feb 24 14:53:12 crc kubenswrapper[4982]: I0224 14:53:12.996727 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.000241 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.001672 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.001871 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.002156 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.004739 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"]
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.005885 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.006467 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.011108 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.101344 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-config\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.101435 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-client-ca\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"
Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.101486 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-proxy-ca-bundles\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"
\"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.101532 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e950fab-d108-486f-83de-8c085384be65-serving-cert\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.101667 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp457\" (UniqueName: \"kubernetes.io/projected/4e950fab-d108-486f-83de-8c085384be65-kube-api-access-rp457\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.203490 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-client-ca\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.203557 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-proxy-ca-bundles\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.203584 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e950fab-d108-486f-83de-8c085384be65-serving-cert\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.203674 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp457\" (UniqueName: \"kubernetes.io/projected/4e950fab-d108-486f-83de-8c085384be65-kube-api-access-rp457\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.203744 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-config\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.206089 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-client-ca\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 
24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.206164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-proxy-ca-bundles\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.212057 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e950fab-d108-486f-83de-8c085384be65-serving-cert\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.232414 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp457\" (UniqueName: \"kubernetes.io/projected/4e950fab-d108-486f-83de-8c085384be65-kube-api-access-rp457\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.340203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-config\") pod \"controller-manager-7bdbc65db5-lfgjr\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") " pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:13 crc kubenswrapper[4982]: I0224 14:53:13.617308 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:15 crc kubenswrapper[4982]: E0224 14:53:15.464380 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 24 14:53:15 crc kubenswrapper[4982]: E0224 14:53:15.464751 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsqb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vndg2_openshift-marketplace(ddc9427b-029e-49c4-bce0-b2d40b9259c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 14:53:15 crc kubenswrapper[4982]: E0224 14:53:15.466072 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vndg2" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" Feb 24 14:53:15 crc kubenswrapper[4982]: E0224 14:53:15.467866 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 24 14:53:15 crc kubenswrapper[4982]: E0224 14:53:15.468005 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87l2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qrnhb_openshift-marketplace(ef70a241-8f38-4315-a1c9-a6df74030a41): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 14:53:15 crc kubenswrapper[4982]: E0224 14:53:15.469184 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qrnhb" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" Feb 24 14:53:16 crc kubenswrapper[4982]: E0224 14:53:16.942647 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vndg2" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" Feb 24 14:53:16 crc kubenswrapper[4982]: E0224 14:53:16.942850 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qrnhb" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" Feb 24 14:53:16 crc kubenswrapper[4982]: I0224 14:53:16.993434 4982 scope.go:117] "RemoveContainer" containerID="87ba11e2c159808618a7cb9aeaabb4fca49d8bef0ec47bc483c2ef44931eb9b8" Feb 24 14:53:17 crc kubenswrapper[4982]: E0224 14:53:17.057822 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 24 14:53:17 crc kubenswrapper[4982]: E0224 14:53:17.057982 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58nmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-stdjb_openshift-marketplace(e41be673-ff4a-465b-a472-22f962fbf6ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 14:53:17 crc kubenswrapper[4982]: E0224 14:53:17.059173 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-stdjb" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" Feb 24 14:53:17 crc kubenswrapper[4982]: E0224 14:53:17.065940 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 24 14:53:17 crc kubenswrapper[4982]: E0224 14:53:17.066054 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdb8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bv7hr_openshift-marketplace(5355d669-4f87-48b6-b389-09f97979f9c6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 14:53:17 crc kubenswrapper[4982]: E0224 14:53:17.067192 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bv7hr" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" Feb 24 14:53:17 crc kubenswrapper[4982]: I0224 14:53:17.251134 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 14:53:17 crc kubenswrapper[4982]: I0224 14:53:17.388614 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvwws"] Feb 24 14:53:17 crc kubenswrapper[4982]: I0224 14:53:17.390534 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gkfkc"] Feb 24 14:53:17 crc kubenswrapper[4982]: I0224 14:53:17.427401 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 14:53:18 crc kubenswrapper[4982]: E0224 14:53:18.331185 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-stdjb" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" Feb 24 14:53:18 crc kubenswrapper[4982]: E0224 14:53:18.331213 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bv7hr" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" Feb 24 14:53:18 crc kubenswrapper[4982]: W0224 14:53:18.354831 4982 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c9513b8_113e_4a7d_8c23_b5596888d66a.slice/crio-07de65f0157c93e2d98b7b09e15ed9f58436eb8497b30442d5ab66e03e99cd17 WatchSource:0}: Error finding container 07de65f0157c93e2d98b7b09e15ed9f58436eb8497b30442d5ab66e03e99cd17: Status 404 returned error can't find the container with id 07de65f0157c93e2d98b7b09e15ed9f58436eb8497b30442d5ab66e03e99cd17 Feb 24 14:53:18 crc kubenswrapper[4982]: W0224 14:53:18.355938 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4ba93aa6_3011_4955_9efe_d53362976685.slice/crio-7858d0b3ddfd3d96a5899ad6d6c2ab3e3dc0f46871f90c2f7a2418e36a696cb6 WatchSource:0}: Error finding container 7858d0b3ddfd3d96a5899ad6d6c2ab3e3dc0f46871f90c2f7a2418e36a696cb6: Status 404 returned error can't find the container with id 7858d0b3ddfd3d96a5899ad6d6c2ab3e3dc0f46871f90c2f7a2418e36a696cb6 Feb 24 14:53:18 crc kubenswrapper[4982]: W0224 14:53:18.360929 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee9cd5d1_3ce1_4722_b3aa_892b33502443.slice/crio-591ddb0d2146605148cff1fd6503046ee0a4a3b4e8d8b3dad9b38d4fb5c55b61 WatchSource:0}: Error finding container 591ddb0d2146605148cff1fd6503046ee0a4a3b4e8d8b3dad9b38d4fb5c55b61: Status 404 returned error can't find the container with id 591ddb0d2146605148cff1fd6503046ee0a4a3b4e8d8b3dad9b38d4fb5c55b61 Feb 24 14:53:18 crc kubenswrapper[4982]: W0224 14:53:18.423206 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4c357ab0_449d_4192_8e06_c09c0f8b59c6.slice/crio-6aad722e9ff236a7ddf849b69467cbffe7bf64b1da553f83c1b25de651480067 WatchSource:0}: Error finding container 6aad722e9ff236a7ddf849b69467cbffe7bf64b1da553f83c1b25de651480067: Status 404 returned error can't find the container with id 6aad722e9ff236a7ddf849b69467cbffe7bf64b1da553f83c1b25de651480067 Feb 24 14:53:18 crc kubenswrapper[4982]: I0224 14:53:18.534592 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 14:53:18 crc kubenswrapper[4982]: I0224 14:53:18.899997 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"] Feb 24 14:53:18 crc kubenswrapper[4982]: W0224 14:53:18.925602 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e950fab_d108_486f_83de_8c085384be65.slice/crio-357685c836ebee2ca5f8bc126ec969ed0e3240e8abdb0fc2f7e9fbf9483f868d WatchSource:0}: Error finding container 357685c836ebee2ca5f8bc126ec969ed0e3240e8abdb0fc2f7e9fbf9483f868d: Status 404 returned error can't find the container with id 357685c836ebee2ca5f8bc126ec969ed0e3240e8abdb0fc2f7e9fbf9483f868d Feb 24 14:53:18 crc kubenswrapper[4982]: I0224 14:53:18.957321 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"] Feb 24 14:53:18 crc kubenswrapper[4982]: I0224 14:53:18.967448 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 14:53:19 crc kubenswrapper[4982]: W0224 14:53:19.001892 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod875ece47_3110_4574_9725_0d76af8a8498.slice/crio-77eaf0feaea2c5426a276287fac8ed8eb3c047345f0714b4a12e1f40e36f7bcb WatchSource:0}: Error finding 
container 77eaf0feaea2c5426a276287fac8ed8eb3c047345f0714b4a12e1f40e36f7bcb: Status 404 returned error can't find the container with id 77eaf0feaea2c5426a276287fac8ed8eb3c047345f0714b4a12e1f40e36f7bcb Feb 24 14:53:19 crc kubenswrapper[4982]: W0224 14:53:19.002261 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod133dbd75_9538_4a08_8fb6_faa198403f55.slice/crio-b85a9087a2fe2665d27d8141442df47408977972e40b32cbca4f77239729edb3 WatchSource:0}: Error finding container b85a9087a2fe2665d27d8141442df47408977972e40b32cbca4f77239729edb3: Status 404 returned error can't find the container with id b85a9087a2fe2665d27d8141442df47408977972e40b32cbca4f77239729edb3 Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.231690 4982 generic.go:334] "Generic (PLEG): container finished" podID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerID="9c8866c562c405c05587099ea9d575612c2e92b2c349f87fc9e6ce3a861d2741" exitCode=0 Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.231978 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkfkc" event={"ID":"0c9513b8-113e-4a7d-8c23-b5596888d66a","Type":"ContainerDied","Data":"9c8866c562c405c05587099ea9d575612c2e92b2c349f87fc9e6ce3a861d2741"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.232007 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkfkc" event={"ID":"0c9513b8-113e-4a7d-8c23-b5596888d66a","Type":"ContainerStarted","Data":"07de65f0157c93e2d98b7b09e15ed9f58436eb8497b30442d5ab66e03e99cd17"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.238660 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f482128-8f1f-43bc-b715-03049473f155" containerID="52c37b234f73f64a3e962c5797162e7f29b78a0427ddd14b01c4669edfbf927a" exitCode=0 Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.238986 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzclt" event={"ID":"6f482128-8f1f-43bc-b715-03049473f155","Type":"ContainerDied","Data":"52c37b234f73f64a3e962c5797162e7f29b78a0427ddd14b01c4669edfbf927a"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.247886 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" event={"ID":"4e950fab-d108-486f-83de-8c085384be65","Type":"ContainerStarted","Data":"0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.247931 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" event={"ID":"4e950fab-d108-486f-83de-8c085384be65","Type":"ContainerStarted","Data":"357685c836ebee2ca5f8bc126ec969ed0e3240e8abdb0fc2f7e9fbf9483f868d"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.248743 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.256635 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.258170 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"4ba93aa6-3011-4955-9efe-d53362976685","Type":"ContainerStarted","Data":"ee61d71277778af5c3c5d43fb78cbe9db649a3aa6022f4a11b8b5fd10fe2171e"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.258401 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4ba93aa6-3011-4955-9efe-d53362976685","Type":"ContainerStarted","Data":"7858d0b3ddfd3d96a5899ad6d6c2ab3e3dc0f46871f90c2f7a2418e36a696cb6"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.263642 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4c357ab0-449d-4192-8e06-c09c0f8b59c6","Type":"ContainerStarted","Data":"235f75457bdb09e9c45c3b5718ba607c6881af89d6dcdad3f8ac4e6453edac00"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.263722 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4c357ab0-449d-4192-8e06-c09c0f8b59c6","Type":"ContainerStarted","Data":"6aad722e9ff236a7ddf849b69467cbffe7bf64b1da553f83c1b25de651480067"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.263741 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"875ece47-3110-4574-9725-0d76af8a8498","Type":"ContainerStarted","Data":"77eaf0feaea2c5426a276287fac8ed8eb3c047345f0714b4a12e1f40e36f7bcb"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.269012 4982 generic.go:334] "Generic (PLEG): container finished" podID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerID="3ffa466dc0c83de94f5eee93f992f8cd8b2ed2db77016ff31c430fde6502cfb1" exitCode=0 Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.269246 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvwws" event={"ID":"ee9cd5d1-3ce1-4722-b3aa-892b33502443","Type":"ContainerDied","Data":"3ffa466dc0c83de94f5eee93f992f8cd8b2ed2db77016ff31c430fde6502cfb1"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.269367 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvwws" event={"ID":"ee9cd5d1-3ce1-4722-b3aa-892b33502443","Type":"ContainerStarted","Data":"591ddb0d2146605148cff1fd6503046ee0a4a3b4e8d8b3dad9b38d4fb5c55b61"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.283362 4982 generic.go:334] "Generic (PLEG): container finished" podID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerID="deac5d73be4985eebc7a645309d5b73d28acc8f1ba47f53bd3cb7c4abdecf8ed" exitCode=0 Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.283440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpp6q" event={"ID":"7a881e66-97bc-43af-8be8-1cf11ec61b72","Type":"ContainerDied","Data":"deac5d73be4985eebc7a645309d5b73d28acc8f1ba47f53bd3cb7c4abdecf8ed"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.303855 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" event={"ID":"133dbd75-9538-4a08-8fb6-faa198403f55","Type":"ContainerStarted","Data":"3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.303899 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" 
event={"ID":"133dbd75-9538-4a08-8fb6-faa198403f55","Type":"ContainerStarted","Data":"b85a9087a2fe2665d27d8141442df47408977972e40b32cbca4f77239729edb3"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.304954 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.306176 4982 patch_prober.go:28] interesting pod/route-controller-manager-599554976-8nzdt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.306214 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" podUID="133dbd75-9538-4a08-8fb6-faa198403f55" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.318869 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"262ce385-a8fb-4909-a456-7eeea3f77634","Type":"ContainerStarted","Data":"8cde22a7d056e09e934db87f3c3fab6d75f18bb4baba0f861b3056d7e52ebe66"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.318904 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"262ce385-a8fb-4909-a456-7eeea3f77634","Type":"ContainerStarted","Data":"d2998dbc5757b10207d735c1d68bd203cf738b909691a9d41c0b0b972ea23849"} Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.352314 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=42.352299572 podStartE2EDuration="42.352299572s" podCreationTimestamp="2026-02-24 14:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:53:19.350978176 +0000 UTC m=+260.970036669" watchObservedRunningTime="2026-02-24 14:53:19.352299572 +0000 UTC m=+260.971358065" Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.368264 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=42.368247592 podStartE2EDuration="42.368247592s" podCreationTimestamp="2026-02-24 14:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:53:19.36560204 +0000 UTC m=+260.984660533" watchObservedRunningTime="2026-02-24 14:53:19.368247592 +0000 UTC m=+260.987306085" Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.388778 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" podStartSLOduration=11.388760566 podStartE2EDuration="11.388760566s" podCreationTimestamp="2026-02-24 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:53:19.387176043 +0000 UTC m=+261.006234536" watchObservedRunningTime="2026-02-24 14:53:19.388760566 +0000 UTC 
m=+261.007819059" Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.466484 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" podStartSLOduration=11.466468384 podStartE2EDuration="11.466468384s" podCreationTimestamp="2026-02-24 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:53:19.462135877 +0000 UTC m=+261.081194390" watchObservedRunningTime="2026-02-24 14:53:19.466468384 +0000 UTC m=+261.085526877" Feb 24 14:53:19 crc kubenswrapper[4982]: I0224 14:53:19.497863 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=12.497845952 podStartE2EDuration="12.497845952s" podCreationTimestamp="2026-02-24 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:53:19.493625008 +0000 UTC m=+261.112683501" watchObservedRunningTime="2026-02-24 14:53:19.497845952 +0000 UTC m=+261.116904445" Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.334410 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpp6q" event={"ID":"7a881e66-97bc-43af-8be8-1cf11ec61b72","Type":"ContainerStarted","Data":"9e7f098c1b6625752ba9feb51516b2cc61947f9ea574c8fd92aa6af12ae60db0"} Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.340778 4982 generic.go:334] "Generic (PLEG): container finished" podID="262ce385-a8fb-4909-a456-7eeea3f77634" containerID="8cde22a7d056e09e934db87f3c3fab6d75f18bb4baba0f861b3056d7e52ebe66" exitCode=0 Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.340890 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"262ce385-a8fb-4909-a456-7eeea3f77634","Type":"ContainerDied","Data":"8cde22a7d056e09e934db87f3c3fab6d75f18bb4baba0f861b3056d7e52ebe66"} Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.349190 4982 generic.go:334] "Generic (PLEG): container finished" podID="4ba93aa6-3011-4955-9efe-d53362976685" containerID="ee61d71277778af5c3c5d43fb78cbe9db649a3aa6022f4a11b8b5fd10fe2171e" exitCode=0 Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.349303 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4ba93aa6-3011-4955-9efe-d53362976685","Type":"ContainerDied","Data":"ee61d71277778af5c3c5d43fb78cbe9db649a3aa6022f4a11b8b5fd10fe2171e"} Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.351287 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"875ece47-3110-4574-9725-0d76af8a8498","Type":"ContainerStarted","Data":"3eb48e187d946ec8603bf9f09151fc5cedc637f631394f5292966fa809de1357"} Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.353311 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zpp6q" podStartSLOduration=12.568758159 podStartE2EDuration="44.353300701s" podCreationTimestamp="2026-02-24 14:52:36 +0000 UTC" firstStartedPulling="2026-02-24 14:52:47.967414522 +0000 UTC m=+229.586473015" lastFinishedPulling="2026-02-24 14:53:19.751957064 +0000 UTC m=+261.371015557" observedRunningTime="2026-02-24 14:53:20.352974571 +0000 UTC m=+261.972033084" 
watchObservedRunningTime="2026-02-24 14:53:20.353300701 +0000 UTC m=+261.972359194" Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.354157 4982 generic.go:334] "Generic (PLEG): container finished" podID="4c357ab0-449d-4192-8e06-c09c0f8b59c6" containerID="235f75457bdb09e9c45c3b5718ba607c6881af89d6dcdad3f8ac4e6453edac00" exitCode=0 Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.354277 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4c357ab0-449d-4192-8e06-c09c0f8b59c6","Type":"ContainerDied","Data":"235f75457bdb09e9c45c3b5718ba607c6881af89d6dcdad3f8ac4e6453edac00"} Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.358947 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzclt" event={"ID":"6f482128-8f1f-43bc-b715-03049473f155","Type":"ContainerStarted","Data":"21f25b7b3df41cd9c2ac09af30d904cf8da15f52894f1ea27461071ca796157e"} Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.364246 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.370185 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=9.37016728 podStartE2EDuration="9.37016728s" podCreationTimestamp="2026-02-24 14:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:53:20.368287057 +0000 UTC m=+261.987345560" watchObservedRunningTime="2026-02-24 14:53:20.37016728 +0000 UTC m=+261.989225773" Feb 24 14:53:20 crc kubenswrapper[4982]: I0224 14:53:20.467359 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rzclt" podStartSLOduration=12.790365455 podStartE2EDuration="44.467342392s" podCreationTimestamp="2026-02-24 14:52:36 +0000 UTC" firstStartedPulling="2026-02-24 14:52:47.96732936 +0000 UTC m=+229.586387893" lastFinishedPulling="2026-02-24 14:53:19.644306337 +0000 UTC m=+261.263364830" observedRunningTime="2026-02-24 14:53:20.466165349 +0000 UTC m=+262.085223842" watchObservedRunningTime="2026-02-24 14:53:20.467342392 +0000 UTC m=+262.086400885" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.687745 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.747072 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kube-api-access\") pod \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\" (UID: \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\") " Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.747150 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kubelet-dir\") pod \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\" (UID: \"4c357ab0-449d-4192-8e06-c09c0f8b59c6\") " Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.747270 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4c357ab0-449d-4192-8e06-c09c0f8b59c6" (UID: "4c357ab0-449d-4192-8e06-c09c0f8b59c6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.747431 4982 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.752519 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.754181 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4c357ab0-449d-4192-8e06-c09c0f8b59c6" (UID: "4c357ab0-449d-4192-8e06-c09c0f8b59c6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.758286 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.848734 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/262ce385-a8fb-4909-a456-7eeea3f77634-kubelet-dir\") pod \"262ce385-a8fb-4909-a456-7eeea3f77634\" (UID: \"262ce385-a8fb-4909-a456-7eeea3f77634\") " Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.848782 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ba93aa6-3011-4955-9efe-d53362976685-kubelet-dir\") pod \"4ba93aa6-3011-4955-9efe-d53362976685\" (UID: \"4ba93aa6-3011-4955-9efe-d53362976685\") " Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.848867 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/262ce385-a8fb-4909-a456-7eeea3f77634-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "262ce385-a8fb-4909-a456-7eeea3f77634" (UID: "262ce385-a8fb-4909-a456-7eeea3f77634"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.848901 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/262ce385-a8fb-4909-a456-7eeea3f77634-kube-api-access\") pod \"262ce385-a8fb-4909-a456-7eeea3f77634\" (UID: \"262ce385-a8fb-4909-a456-7eeea3f77634\") " Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.848927 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ba93aa6-3011-4955-9efe-d53362976685-kube-api-access\") pod \"4ba93aa6-3011-4955-9efe-d53362976685\" (UID: \"4ba93aa6-3011-4955-9efe-d53362976685\") " Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.848944 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba93aa6-3011-4955-9efe-d53362976685-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4ba93aa6-3011-4955-9efe-d53362976685" (UID: "4ba93aa6-3011-4955-9efe-d53362976685"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.849378 4982 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/262ce385-a8fb-4909-a456-7eeea3f77634-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.849398 4982 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ba93aa6-3011-4955-9efe-d53362976685-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.849407 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c357ab0-449d-4192-8e06-c09c0f8b59c6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.851738 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba93aa6-3011-4955-9efe-d53362976685-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4ba93aa6-3011-4955-9efe-d53362976685" (UID: "4ba93aa6-3011-4955-9efe-d53362976685"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.851929 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/262ce385-a8fb-4909-a456-7eeea3f77634-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "262ce385-a8fb-4909-a456-7eeea3f77634" (UID: "262ce385-a8fb-4909-a456-7eeea3f77634"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.950908 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/262ce385-a8fb-4909-a456-7eeea3f77634-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:21 crc kubenswrapper[4982]: I0224 14:53:21.950942 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ba93aa6-3011-4955-9efe-d53362976685-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:22 crc kubenswrapper[4982]: I0224 14:53:22.373832 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 14:53:22 crc kubenswrapper[4982]: I0224 14:53:22.373819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4c357ab0-449d-4192-8e06-c09c0f8b59c6","Type":"ContainerDied","Data":"6aad722e9ff236a7ddf849b69467cbffe7bf64b1da553f83c1b25de651480067"} Feb 24 14:53:22 crc kubenswrapper[4982]: I0224 14:53:22.373966 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aad722e9ff236a7ddf849b69467cbffe7bf64b1da553f83c1b25de651480067" Feb 24 14:53:22 crc kubenswrapper[4982]: I0224 14:53:22.375941 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"262ce385-a8fb-4909-a456-7eeea3f77634","Type":"ContainerDied","Data":"d2998dbc5757b10207d735c1d68bd203cf738b909691a9d41c0b0b972ea23849"} Feb 24 14:53:22 crc kubenswrapper[4982]: I0224 14:53:22.375989 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2998dbc5757b10207d735c1d68bd203cf738b909691a9d41c0b0b972ea23849" Feb 24 14:53:22 crc kubenswrapper[4982]: I0224 14:53:22.376003 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 14:53:22 crc kubenswrapper[4982]: I0224 14:53:22.377776 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4ba93aa6-3011-4955-9efe-d53362976685","Type":"ContainerDied","Data":"7858d0b3ddfd3d96a5899ad6d6c2ab3e3dc0f46871f90c2f7a2418e36a696cb6"} Feb 24 14:53:22 crc kubenswrapper[4982]: I0224 14:53:22.377837 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7858d0b3ddfd3d96a5899ad6d6c2ab3e3dc0f46871f90c2f7a2418e36a696cb6" Feb 24 14:53:22 crc kubenswrapper[4982]: I0224 14:53:22.377802 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 14:53:26 crc kubenswrapper[4982]: I0224 14:53:26.555778 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:53:26 crc kubenswrapper[4982]: I0224 14:53:26.556270 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:53:26 crc kubenswrapper[4982]: I0224 14:53:26.690306 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:53:26 crc kubenswrapper[4982]: I0224 14:53:26.968797 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:53:26 crc kubenswrapper[4982]: I0224 14:53:26.969079 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:53:27 crc kubenswrapper[4982]: I0224 14:53:27.008157 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:53:27 crc kubenswrapper[4982]: I0224 14:53:27.444604 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zpp6q" Feb 24 14:53:27 crc kubenswrapper[4982]: I0224 14:53:27.447075 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:53:27 crc kubenswrapper[4982]: I0224 14:53:27.880638 4982 csr.go:261] certificate signing request csr-cmpbl is approved, waiting to be issued Feb 24 14:53:27 crc kubenswrapper[4982]: I0224 14:53:27.888950 4982 csr.go:257] certificate signing request csr-cmpbl is issued Feb 24 14:53:27 crc kubenswrapper[4982]: I0224 14:53:27.914680 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpp6q"] Feb 24 14:53:28 crc kubenswrapper[4982]: I0224 14:53:28.410118 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvwws" event={"ID":"ee9cd5d1-3ce1-4722-b3aa-892b33502443","Type":"ContainerStarted","Data":"7cc68c3c8fd4404c6193cecd509e0b4736b396791b46e54417d0a9b79dd6969f"} Feb 24 14:53:28 crc kubenswrapper[4982]: I0224 14:53:28.412828 4982 generic.go:334] "Generic (PLEG): container finished" podID="41b9348a-b44f-4ecf-9043-0948b992d64e" containerID="a2a34f17354d1fd4d5d21f48d2aef3a0361410c6416d0ce43a0ca7b4f3a4d0f9" exitCode=0 Feb 24 14:53:28 crc kubenswrapper[4982]: I0224 14:53:28.412900 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532412-hd7v7" event={"ID":"41b9348a-b44f-4ecf-9043-0948b992d64e","Type":"ContainerDied","Data":"a2a34f17354d1fd4d5d21f48d2aef3a0361410c6416d0ce43a0ca7b4f3a4d0f9"} Feb 24 14:53:28 crc kubenswrapper[4982]: I0224 14:53:28.414768 4982 generic.go:334] "Generic (PLEG): container finished" podID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerID="3902f61bec0daa63db6eaed533d98df8018343a68b54ce6229a4772bba523bb0" exitCode=0 Feb 24 14:53:28 crc kubenswrapper[4982]: I0224 14:53:28.415166 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkfkc" event={"ID":"0c9513b8-113e-4a7d-8c23-b5596888d66a","Type":"ContainerDied","Data":"3902f61bec0daa63db6eaed533d98df8018343a68b54ce6229a4772bba523bb0"} Feb 24 14:53:28 crc 
kubenswrapper[4982]: I0224 14:53:28.890519 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-21 09:06:35.600578401 +0000 UTC Feb 24 14:53:28 crc kubenswrapper[4982]: I0224 14:53:28.890586 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6474h13m6.709998926s for next certificate rotation Feb 24 14:53:29 crc kubenswrapper[4982]: I0224 14:53:29.427282 4982 generic.go:334] "Generic (PLEG): container finished" podID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerID="7cc68c3c8fd4404c6193cecd509e0b4736b396791b46e54417d0a9b79dd6969f" exitCode=0 Feb 24 14:53:29 crc kubenswrapper[4982]: I0224 14:53:29.428241 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvwws" event={"ID":"ee9cd5d1-3ce1-4722-b3aa-892b33502443","Type":"ContainerDied","Data":"7cc68c3c8fd4404c6193cecd509e0b4736b396791b46e54417d0a9b79dd6969f"} Feb 24 14:53:29 crc kubenswrapper[4982]: I0224 14:53:29.428381 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zpp6q" podUID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerName="registry-server" containerID="cri-o://9e7f098c1b6625752ba9feb51516b2cc61947f9ea574c8fd92aa6af12ae60db0" gracePeriod=2 Feb 24 14:53:29 crc kubenswrapper[4982]: I0224 14:53:29.773795 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532412-hd7v7" Feb 24 14:53:29 crc kubenswrapper[4982]: I0224 14:53:29.871291 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmdm4\" (UniqueName: \"kubernetes.io/projected/41b9348a-b44f-4ecf-9043-0948b992d64e-kube-api-access-zmdm4\") pod \"41b9348a-b44f-4ecf-9043-0948b992d64e\" (UID: \"41b9348a-b44f-4ecf-9043-0948b992d64e\") " Feb 24 14:53:29 crc kubenswrapper[4982]: I0224 14:53:29.876628 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b9348a-b44f-4ecf-9043-0948b992d64e-kube-api-access-zmdm4" (OuterVolumeSpecName: "kube-api-access-zmdm4") pod "41b9348a-b44f-4ecf-9043-0948b992d64e" (UID: "41b9348a-b44f-4ecf-9043-0948b992d64e"). InnerVolumeSpecName "kube-api-access-zmdm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:53:29 crc kubenswrapper[4982]: I0224 14:53:29.891739 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-14 17:17:34.073618917 +0000 UTC Feb 24 14:53:29 crc kubenswrapper[4982]: I0224 14:53:29.891771 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7778h24m4.181850396s for next certificate rotation Feb 24 14:53:29 crc kubenswrapper[4982]: I0224 14:53:29.972040 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmdm4\" (UniqueName: \"kubernetes.io/projected/41b9348a-b44f-4ecf-9043-0948b992d64e-kube-api-access-zmdm4\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.434209 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532412-hd7v7"
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.434200 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532412-hd7v7" event={"ID":"41b9348a-b44f-4ecf-9043-0948b992d64e","Type":"ContainerDied","Data":"5a6535ad86758c252192ca79eb05858dd24068e612b50fdfde61e7be82b81c7d"}
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.434354 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a6535ad86758c252192ca79eb05858dd24068e612b50fdfde61e7be82b81c7d"
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.437209 4982 generic.go:334] "Generic (PLEG): container finished" podID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerID="9e7f098c1b6625752ba9feb51516b2cc61947f9ea574c8fd92aa6af12ae60db0" exitCode=0
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.437273 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpp6q" event={"ID":"7a881e66-97bc-43af-8be8-1cf11ec61b72","Type":"ContainerDied","Data":"9e7f098c1b6625752ba9feb51516b2cc61947f9ea574c8fd92aa6af12ae60db0"}
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.439704 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkfkc" event={"ID":"0c9513b8-113e-4a7d-8c23-b5596888d66a","Type":"ContainerStarted","Data":"3ae72ab6e3dbdbba61012364aa169afa70130dfcbd97c5ae047d98d2659c2ad3"}
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.465673 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gkfkc" podStartSLOduration=43.426203561 podStartE2EDuration="53.465652822s" podCreationTimestamp="2026-02-24 14:52:37 +0000 UTC" firstStartedPulling="2026-02-24 14:53:19.237698066 +0000 UTC m=+260.856756559" lastFinishedPulling="2026-02-24 14:53:29.277147327 +0000 UTC m=+270.896205820" observedRunningTime="2026-02-24 14:53:30.462043551 +0000 UTC m=+272.081102064" watchObservedRunningTime="2026-02-24 14:53:30.465652822 +0000 UTC m=+272.084711315"
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.759081 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zpp6q"
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.786541 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-utilities\") pod \"7a881e66-97bc-43af-8be8-1cf11ec61b72\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") "
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.786693 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5g7m\" (UniqueName: \"kubernetes.io/projected/7a881e66-97bc-43af-8be8-1cf11ec61b72-kube-api-access-q5g7m\") pod \"7a881e66-97bc-43af-8be8-1cf11ec61b72\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") "
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.786743 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-catalog-content\") pod \"7a881e66-97bc-43af-8be8-1cf11ec61b72\" (UID: \"7a881e66-97bc-43af-8be8-1cf11ec61b72\") "
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.787508 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-utilities" (OuterVolumeSpecName: "utilities") pod "7a881e66-97bc-43af-8be8-1cf11ec61b72" (UID: "7a881e66-97bc-43af-8be8-1cf11ec61b72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.792196 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a881e66-97bc-43af-8be8-1cf11ec61b72-kube-api-access-q5g7m" (OuterVolumeSpecName: "kube-api-access-q5g7m") pod "7a881e66-97bc-43af-8be8-1cf11ec61b72" (UID: "7a881e66-97bc-43af-8be8-1cf11ec61b72"). InnerVolumeSpecName "kube-api-access-q5g7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.874430 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a881e66-97bc-43af-8be8-1cf11ec61b72" (UID: "7a881e66-97bc-43af-8be8-1cf11ec61b72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.888043 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5g7m\" (UniqueName: \"kubernetes.io/projected/7a881e66-97bc-43af-8be8-1cf11ec61b72-kube-api-access-q5g7m\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.888074 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:30 crc kubenswrapper[4982]: I0224 14:53:30.888084 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a881e66-97bc-43af-8be8-1cf11ec61b72-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:31 crc kubenswrapper[4982]: I0224 14:53:31.454318 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zpp6q"
Feb 24 14:53:31 crc kubenswrapper[4982]: I0224 14:53:31.454716 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpp6q" event={"ID":"7a881e66-97bc-43af-8be8-1cf11ec61b72","Type":"ContainerDied","Data":"2220dd254cf6a6fab3a2424c77238df4834dac67a3eb0269e6a09c152ea22abf"}
Feb 24 14:53:31 crc kubenswrapper[4982]: I0224 14:53:31.454961 4982 scope.go:117] "RemoveContainer" containerID="9e7f098c1b6625752ba9feb51516b2cc61947f9ea574c8fd92aa6af12ae60db0"
Feb 24 14:53:31 crc kubenswrapper[4982]: I0224 14:53:31.474320 4982 scope.go:117] "RemoveContainer" containerID="deac5d73be4985eebc7a645309d5b73d28acc8f1ba47f53bd3cb7c4abdecf8ed"
Feb 24 14:53:31 crc kubenswrapper[4982]: I0224 14:53:31.480956 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpp6q"]
Feb 24 14:53:31 crc kubenswrapper[4982]: I0224 14:53:31.483432 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpp6q"]
Feb 24 14:53:31 crc kubenswrapper[4982]: I0224 14:53:31.492717 4982 scope.go:117] "RemoveContainer" containerID="5542ff92ac297599b41027f4e7eb0be512928b0d07f5635a15fb13d554aa8878"
Feb 24 14:53:33 crc kubenswrapper[4982]: I0224 14:53:33.160267 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a881e66-97bc-43af-8be8-1cf11ec61b72" path="/var/lib/kubelet/pods/7a881e66-97bc-43af-8be8-1cf11ec61b72/volumes"
Feb 24 14:53:33 crc kubenswrapper[4982]: I0224 14:53:33.469469 4982 generic.go:334] "Generic (PLEG): container finished" podID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerID="7a3e7a6df569f4f6b595f721597f754515ff10e96f59fc2c39bf21bb89ca5c91" exitCode=0
Feb 24 14:53:33 crc kubenswrapper[4982]: I0224 14:53:33.469542 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vndg2" event={"ID":"ddc9427b-029e-49c4-bce0-b2d40b9259c8","Type":"ContainerDied","Data":"7a3e7a6df569f4f6b595f721597f754515ff10e96f59fc2c39bf21bb89ca5c91"}
Feb 24 14:53:33 crc kubenswrapper[4982]: I0224 14:53:33.471454 4982 generic.go:334] "Generic (PLEG): container finished" podID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerID="fd15ec9eea1e1b9847ca155db20f6ff4d652d2b13fa6c9ca292ba5175e27e319" exitCode=0
Feb 24 14:53:33 crc kubenswrapper[4982]: I0224 14:53:33.471536 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrnhb" event={"ID":"ef70a241-8f38-4315-a1c9-a6df74030a41","Type":"ContainerDied","Data":"fd15ec9eea1e1b9847ca155db20f6ff4d652d2b13fa6c9ca292ba5175e27e319"}
Feb 24 14:53:33 crc kubenswrapper[4982]: I0224 14:53:33.474170 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvwws" event={"ID":"ee9cd5d1-3ce1-4722-b3aa-892b33502443","Type":"ContainerStarted","Data":"e06d51792bf74de0566b614c58673fefa982a507bc7142c43671e9080e46ab78"}
Feb 24 14:53:33 crc kubenswrapper[4982]: I0224 14:53:33.510396 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xvwws" podStartSLOduration=43.29266478 podStartE2EDuration="56.510379055s" podCreationTimestamp="2026-02-24 14:52:37 +0000 UTC" firstStartedPulling="2026-02-24 14:53:19.273681719 +0000 UTC m=+260.892740212" lastFinishedPulling="2026-02-24 14:53:32.491395994 +0000 UTC m=+274.110454487" observedRunningTime="2026-02-24 14:53:33.507885115 +0000 UTC m=+275.126943608" watchObservedRunningTime="2026-02-24 14:53:33.510379055 +0000 UTC m=+275.129437548"
Feb 24 14:53:34 crc kubenswrapper[4982]: I0224 14:53:34.481235 4982 generic.go:334] "Generic (PLEG): container finished" podID="5355d669-4f87-48b6-b389-09f97979f9c6" containerID="611e505cbdcc3a2065d1045390023bf768bbb05f732e51f5616e4ab94e9841eb" exitCode=0
Feb 24 14:53:34 crc kubenswrapper[4982]: I0224 14:53:34.481413 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv7hr" event={"ID":"5355d669-4f87-48b6-b389-09f97979f9c6","Type":"ContainerDied","Data":"611e505cbdcc3a2065d1045390023bf768bbb05f732e51f5616e4ab94e9841eb"}
Feb 24 14:53:37 crc kubenswrapper[4982]: I0224 14:53:37.583105 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xvwws"
Feb 24 14:53:37 crc kubenswrapper[4982]: I0224 14:53:37.583431 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xvwws"
Feb 24 14:53:37 crc kubenswrapper[4982]: I0224 14:53:37.774450 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gkfkc"
Feb 24 14:53:37 crc kubenswrapper[4982]: I0224 14:53:37.774536 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gkfkc"
Feb 24 14:53:37 crc kubenswrapper[4982]: I0224 14:53:37.820456 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gkfkc"
Feb 24 14:53:38 crc kubenswrapper[4982]: I0224 14:53:38.560733 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gkfkc"
Feb 24 14:53:38 crc kubenswrapper[4982]: I0224 14:53:38.641819 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xvwws" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerName="registry-server" probeResult="failure" output=<
Feb 24 14:53:38 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s
Feb 24 14:53:38 crc kubenswrapper[4982]: >
Feb 24 14:53:38 crc kubenswrapper[4982]: I0224 14:53:38.738408 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 14:53:38 crc kubenswrapper[4982]: I0224 14:53:38.738459 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 14:53:38 crc kubenswrapper[4982]: I0224 14:53:38.738509 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf"
Feb 24 14:53:38 crc kubenswrapper[4982]: I0224 14:53:38.738962 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 14:53:38 crc kubenswrapper[4982]: I0224 14:53:38.739011 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9" gracePeriod=600
Feb 24 14:53:39 crc kubenswrapper[4982]: I0224 14:53:39.116994 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gkfkc"]
Feb 24 14:53:40 crc kubenswrapper[4982]: I0224 14:53:40.521163 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9" exitCode=0
Feb 24 14:53:40 crc kubenswrapper[4982]: I0224 14:53:40.521580 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gkfkc" podUID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerName="registry-server" containerID="cri-o://3ae72ab6e3dbdbba61012364aa169afa70130dfcbd97c5ae047d98d2659c2ad3" gracePeriod=2
Feb 24 14:53:40 crc kubenswrapper[4982]: I0224 14:53:40.521716 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9"}
Feb 24 14:53:41 crc kubenswrapper[4982]: I0224 14:53:41.535938 4982 generic.go:334] "Generic (PLEG): container finished" podID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerID="3ae72ab6e3dbdbba61012364aa169afa70130dfcbd97c5ae047d98d2659c2ad3" exitCode=0
Feb 24 14:53:41 crc kubenswrapper[4982]: I0224 14:53:41.536748 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkfkc" event={"ID":"0c9513b8-113e-4a7d-8c23-b5596888d66a","Type":"ContainerDied","Data":"3ae72ab6e3dbdbba61012364aa169afa70130dfcbd97c5ae047d98d2659c2ad3"}
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.225049 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gkfkc"
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.346666 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-utilities\") pod \"0c9513b8-113e-4a7d-8c23-b5596888d66a\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") "
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.346759 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4xxz\" (UniqueName: \"kubernetes.io/projected/0c9513b8-113e-4a7d-8c23-b5596888d66a-kube-api-access-b4xxz\") pod \"0c9513b8-113e-4a7d-8c23-b5596888d66a\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") "
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.346824 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-catalog-content\") pod \"0c9513b8-113e-4a7d-8c23-b5596888d66a\" (UID: \"0c9513b8-113e-4a7d-8c23-b5596888d66a\") "
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.348280 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-utilities" (OuterVolumeSpecName: "utilities") pod "0c9513b8-113e-4a7d-8c23-b5596888d66a" (UID: "0c9513b8-113e-4a7d-8c23-b5596888d66a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.353642 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9513b8-113e-4a7d-8c23-b5596888d66a-kube-api-access-b4xxz" (OuterVolumeSpecName: "kube-api-access-b4xxz") pod "0c9513b8-113e-4a7d-8c23-b5596888d66a" (UID: "0c9513b8-113e-4a7d-8c23-b5596888d66a"). InnerVolumeSpecName "kube-api-access-b4xxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.450064 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.450164 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4xxz\" (UniqueName: \"kubernetes.io/projected/0c9513b8-113e-4a7d-8c23-b5596888d66a-kube-api-access-b4xxz\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.540024 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c9513b8-113e-4a7d-8c23-b5596888d66a" (UID: "0c9513b8-113e-4a7d-8c23-b5596888d66a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.546272 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gkfkc" event={"ID":"0c9513b8-113e-4a7d-8c23-b5596888d66a","Type":"ContainerDied","Data":"07de65f0157c93e2d98b7b09e15ed9f58436eb8497b30442d5ab66e03e99cd17"}
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.546359 4982 scope.go:117] "RemoveContainer" containerID="3ae72ab6e3dbdbba61012364aa169afa70130dfcbd97c5ae047d98d2659c2ad3"
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.546458 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gkfkc"
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.551629 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9513b8-113e-4a7d-8c23-b5596888d66a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.597100 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gkfkc"]
Feb 24 14:53:42 crc kubenswrapper[4982]: I0224 14:53:42.603202 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gkfkc"]
Feb 24 14:53:43 crc kubenswrapper[4982]: I0224 14:53:43.157998 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c9513b8-113e-4a7d-8c23-b5596888d66a" path="/var/lib/kubelet/pods/0c9513b8-113e-4a7d-8c23-b5596888d66a/volumes"
Feb 24 14:53:43 crc kubenswrapper[4982]: I0224 14:53:43.181441 4982 scope.go:117] "RemoveContainer" containerID="3902f61bec0daa63db6eaed533d98df8018343a68b54ce6229a4772bba523bb0"
Feb 24 14:53:44 crc kubenswrapper[4982]: I0224 14:53:44.274632 4982 scope.go:117] "RemoveContainer" containerID="9c8866c562c405c05587099ea9d575612c2e92b2c349f87fc9e6ce3a861d2741"
Feb 24 14:53:44 crc kubenswrapper[4982]: I0224 14:53:44.570350 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"5f71e224afa19708cf06f60785deb400aee56bae1714124867d30c9a242dd993"}
Feb 24 14:53:45 crc kubenswrapper[4982]: I0224 14:53:45.581269 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrnhb" event={"ID":"ef70a241-8f38-4315-a1c9-a6df74030a41","Type":"ContainerStarted","Data":"1d548731d1e41576669d368040708737f8bb6455b73b6ddbd923cab81b62af28"}
Feb 24 14:53:45 crc kubenswrapper[4982]: I0224 14:53:45.583105 4982 generic.go:334] "Generic (PLEG): container finished" podID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerID="229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f" exitCode=0
Feb 24 14:53:45 crc kubenswrapper[4982]: I0224 14:53:45.583174 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stdjb" event={"ID":"e41be673-ff4a-465b-a472-22f962fbf6ed","Type":"ContainerDied","Data":"229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f"}
Feb 24 14:53:45 crc kubenswrapper[4982]: I0224 14:53:45.592612 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vndg2" event={"ID":"ddc9427b-029e-49c4-bce0-b2d40b9259c8","Type":"ContainerStarted","Data":"ad027a7c2604115a76c210314f8e96e20608e50676104392ff08f014a99e575c"}
Feb 24 14:53:45 crc kubenswrapper[4982]: I0224 14:53:45.610817 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qrnhb" podStartSLOduration=3.451961997 podStartE2EDuration="1m11.6107858s" podCreationTimestamp="2026-02-24 14:52:34 +0000 UTC" firstStartedPulling="2026-02-24 14:52:36.11598075 +0000 UTC m=+217.735039243" lastFinishedPulling="2026-02-24 14:53:44.274804513 +0000 UTC m=+285.893863046" observedRunningTime="2026-02-24 14:53:45.603346814 +0000 UTC m=+287.222405347" watchObservedRunningTime="2026-02-24 14:53:45.6107858 +0000 UTC m=+287.229844333"
Feb 24 14:53:46 crc kubenswrapper[4982]: I0224 14:53:46.607105 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stdjb" event={"ID":"e41be673-ff4a-465b-a472-22f962fbf6ed","Type":"ContainerStarted","Data":"49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761"}
Feb 24 14:53:46 crc kubenswrapper[4982]: I0224 14:53:46.610481 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv7hr" event={"ID":"5355d669-4f87-48b6-b389-09f97979f9c6","Type":"ContainerStarted","Data":"036a30027029037d6092d3d4e4928cf1c66a5b79165d02d0ca47fb2dbbfa24be"}
Feb 24 14:53:46 crc kubenswrapper[4982]: I0224 14:53:46.638174 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-stdjb" podStartSLOduration=2.703387454 podStartE2EDuration="1m12.638156694s" podCreationTimestamp="2026-02-24 14:52:34 +0000 UTC" firstStartedPulling="2026-02-24 14:52:36.12043018 +0000 UTC m=+217.739488673" lastFinishedPulling="2026-02-24 14:53:46.05519938 +0000 UTC m=+287.674257913" observedRunningTime="2026-02-24 14:53:46.636730905 +0000 UTC m=+288.255789418" watchObservedRunningTime="2026-02-24 14:53:46.638156694 +0000 UTC m=+288.257215187"
Feb 24 14:53:46 crc kubenswrapper[4982]: I0224 14:53:46.667117 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vndg2" podStartSLOduration=4.582671997 podStartE2EDuration="1m12.667098849s" podCreationTimestamp="2026-02-24 14:52:34 +0000 UTC" firstStartedPulling="2026-02-24 14:52:35.097102774 +0000 UTC m=+216.716161267" lastFinishedPulling="2026-02-24 14:53:43.181529586 +0000 UTC m=+284.800588119" observedRunningTime="2026-02-24 14:53:46.663650153 +0000 UTC m=+288.282708656" watchObservedRunningTime="2026-02-24 14:53:46.667098849 +0000 UTC m=+288.286157342"
Feb 24 14:53:46 crc kubenswrapper[4982]: I0224 14:53:46.689603 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bv7hr" podStartSLOduration=3.153381487 podStartE2EDuration="1m12.689588204s" podCreationTimestamp="2026-02-24 14:52:34 +0000 UTC" firstStartedPulling="2026-02-24 14:52:36.123986647 +0000 UTC m=+217.743045140" lastFinishedPulling="2026-02-24 14:53:45.660193364 +0000 UTC m=+287.279251857" observedRunningTime="2026-02-24 14:53:46.687648121 +0000 UTC m=+288.306706634" watchObservedRunningTime="2026-02-24 14:53:46.689588204 +0000 UTC m=+288.308646697"
Feb 24 14:53:47 crc kubenswrapper[4982]: I0224 14:53:47.622309 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xvwws"
Feb 24 14:53:47 crc kubenswrapper[4982]: I0224 14:53:47.656164 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xvwws"
Feb 24 14:53:48 crc kubenswrapper[4982]: I0224 14:53:48.816982 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"]
Feb 24 14:53:48 crc kubenswrapper[4982]: I0224 14:53:48.817826 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" podUID="4e950fab-d108-486f-83de-8c085384be65" containerName="controller-manager" containerID="cri-o://0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298" gracePeriod=30
Feb 24 14:53:48 crc kubenswrapper[4982]: I0224 14:53:48.908354 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"]
Feb 24 14:53:48 crc kubenswrapper[4982]: I0224 14:53:48.908746 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" podUID="133dbd75-9538-4a08-8fb6-faa198403f55" containerName="route-controller-manager" containerID="cri-o://3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612" gracePeriod=30
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.372872 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.375954 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.487911 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-client-ca\") pod \"133dbd75-9538-4a08-8fb6-faa198403f55\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") "
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.487983 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-config\") pod \"133dbd75-9538-4a08-8fb6-faa198403f55\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") "
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.488047 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp457\" (UniqueName: \"kubernetes.io/projected/4e950fab-d108-486f-83de-8c085384be65-kube-api-access-rp457\") pod \"4e950fab-d108-486f-83de-8c085384be65\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") "
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.488108 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/133dbd75-9538-4a08-8fb6-faa198403f55-serving-cert\") pod \"133dbd75-9538-4a08-8fb6-faa198403f55\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") "
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.488139 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-client-ca\") pod \"4e950fab-d108-486f-83de-8c085384be65\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") "
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.488179 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-config\") pod \"4e950fab-d108-486f-83de-8c085384be65\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") "
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.488227 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9v4k\" (UniqueName: \"kubernetes.io/projected/133dbd75-9538-4a08-8fb6-faa198403f55-kube-api-access-d9v4k\") pod \"133dbd75-9538-4a08-8fb6-faa198403f55\" (UID: \"133dbd75-9538-4a08-8fb6-faa198403f55\") "
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.488257 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e950fab-d108-486f-83de-8c085384be65-serving-cert\") pod \"4e950fab-d108-486f-83de-8c085384be65\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") "
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.488294 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-proxy-ca-bundles\") pod \"4e950fab-d108-486f-83de-8c085384be65\" (UID: \"4e950fab-d108-486f-83de-8c085384be65\") "
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.489269 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-client-ca" (OuterVolumeSpecName: "client-ca") pod "133dbd75-9538-4a08-8fb6-faa198403f55" (UID: "133dbd75-9538-4a08-8fb6-faa198403f55"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.489405 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-config" (OuterVolumeSpecName: "config") pod "133dbd75-9538-4a08-8fb6-faa198403f55" (UID: "133dbd75-9538-4a08-8fb6-faa198403f55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.489996 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4e950fab-d108-486f-83de-8c085384be65" (UID: "4e950fab-d108-486f-83de-8c085384be65"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.490112 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e950fab-d108-486f-83de-8c085384be65" (UID: "4e950fab-d108-486f-83de-8c085384be65"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.490660 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-config" (OuterVolumeSpecName: "config") pod "4e950fab-d108-486f-83de-8c085384be65" (UID: "4e950fab-d108-486f-83de-8c085384be65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.494323 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e950fab-d108-486f-83de-8c085384be65-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e950fab-d108-486f-83de-8c085384be65" (UID: "4e950fab-d108-486f-83de-8c085384be65"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.494592 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133dbd75-9538-4a08-8fb6-faa198403f55-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "133dbd75-9538-4a08-8fb6-faa198403f55" (UID: "133dbd75-9538-4a08-8fb6-faa198403f55"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.495389 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133dbd75-9538-4a08-8fb6-faa198403f55-kube-api-access-d9v4k" (OuterVolumeSpecName: "kube-api-access-d9v4k") pod "133dbd75-9538-4a08-8fb6-faa198403f55" (UID: "133dbd75-9538-4a08-8fb6-faa198403f55"). InnerVolumeSpecName "kube-api-access-d9v4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.495592 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e950fab-d108-486f-83de-8c085384be65-kube-api-access-rp457" (OuterVolumeSpecName: "kube-api-access-rp457") pod "4e950fab-d108-486f-83de-8c085384be65" (UID: "4e950fab-d108-486f-83de-8c085384be65"). InnerVolumeSpecName "kube-api-access-rp457". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.589633 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp457\" (UniqueName: \"kubernetes.io/projected/4e950fab-d108-486f-83de-8c085384be65-kube-api-access-rp457\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.589667 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/133dbd75-9538-4a08-8fb6-faa198403f55-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.589676 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.589685 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-config\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.589694 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9v4k\" (UniqueName: \"kubernetes.io/projected/133dbd75-9538-4a08-8fb6-faa198403f55-kube-api-access-d9v4k\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.589702 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e950fab-d108-486f-83de-8c085384be65-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.589709 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e950fab-d108-486f-83de-8c085384be65-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.589718 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.589726 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/133dbd75-9538-4a08-8fb6-faa198403f55-config\") on node \"crc\" DevicePath \"\""
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.632384 4982 generic.go:334] "Generic (PLEG): container finished" podID="133dbd75-9538-4a08-8fb6-faa198403f55" containerID="3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612" exitCode=0
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.632461 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" event={"ID":"133dbd75-9538-4a08-8fb6-faa198403f55","Type":"ContainerDied","Data":"3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612"}
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.632469 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.632675 4982 scope.go:117] "RemoveContainer" containerID="3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.632490 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-599554976-8nzdt" event={"ID":"133dbd75-9538-4a08-8fb6-faa198403f55","Type":"ContainerDied","Data":"b85a9087a2fe2665d27d8141442df47408977972e40b32cbca4f77239729edb3"}
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.634921 4982 generic.go:334] "Generic (PLEG): container finished" podID="4e950fab-d108-486f-83de-8c085384be65" containerID="0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298" exitCode=0
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.634983 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" event={"ID":"4e950fab-d108-486f-83de-8c085384be65","Type":"ContainerDied","Data":"0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298"}
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.635034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr" event={"ID":"4e950fab-d108-486f-83de-8c085384be65","Type":"ContainerDied","Data":"357685c836ebee2ca5f8bc126ec969ed0e3240e8abdb0fc2f7e9fbf9483f868d"}
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.635138 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.658759 4982 scope.go:117] "RemoveContainer" containerID="3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612"
Feb 24 14:53:49 crc kubenswrapper[4982]: E0224 14:53:49.662030 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612\": container with ID starting with 3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612 not found: ID does not exist" containerID="3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.662195 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612"} err="failed to get container status \"3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612\": rpc error: code = NotFound desc = could not find container \"3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612\": container with ID starting with 3f917625ba7d6ebaaf5ebdb59b72650daabbeaaa8313101d69bac9f785515612 not found: ID does not exist"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.662247 4982 scope.go:117] "RemoveContainer" containerID="0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.672715 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"]
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.681997 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599554976-8nzdt"]
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.689004 4982 scope.go:117] "RemoveContainer" containerID="0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.691288 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"]
Feb 24 14:53:49 crc kubenswrapper[4982]: E0224 14:53:49.691946 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298\": container with ID starting with 0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298 not found: ID does not exist" containerID="0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.691977 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298"} err="failed to get container status \"0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298\": rpc error: code = NotFound desc = could not find container \"0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298\": container with ID starting with 0d896386589824cd6178a8f83dc1bda81324708eddd96e9924340fb21c0ed298 not found: ID does not exist"
Feb 24 14:53:49 crc kubenswrapper[4982]: I0224 14:53:49.694438 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bdbc65db5-lfgjr"]
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.037666 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"]
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.038733 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerName="registry-server"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.038761 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerName="registry-server"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.038799 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerName="extract-utilities"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.038813 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerName="extract-utilities"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.038832 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b9348a-b44f-4ecf-9043-0948b992d64e" containerName="oc"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.038845 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b9348a-b44f-4ecf-9043-0948b992d64e" containerName="oc"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.038867 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c357ab0-449d-4192-8e06-c09c0f8b59c6" containerName="pruner"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.038881 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c357ab0-449d-4192-8e06-c09c0f8b59c6" containerName="pruner"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.038897 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerName="extract-content"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.038909 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerName="extract-content"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.038929 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerName="extract-utilities"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.038942 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerName="extract-utilities"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.038958 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262ce385-a8fb-4909-a456-7eeea3f77634" containerName="pruner"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.038974 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="262ce385-a8fb-4909-a456-7eeea3f77634" containerName="pruner"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.038992 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e950fab-d108-486f-83de-8c085384be65" containerName="controller-manager"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039008 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e950fab-d108-486f-83de-8c085384be65" containerName="controller-manager"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.039022 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133dbd75-9538-4a08-8fb6-faa198403f55" containerName="route-controller-manager"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039035 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="133dbd75-9538-4a08-8fb6-faa198403f55" containerName="route-controller-manager"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.039049 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerName="registry-server"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039061 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerName="registry-server"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.039076 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerName="extract-content"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039088 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerName="extract-content"
Feb 24 14:53:50 crc kubenswrapper[4982]: E0224 14:53:50.039111 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba93aa6-3011-4955-9efe-d53362976685" containerName="pruner"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039123 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba93aa6-3011-4955-9efe-d53362976685" containerName="pruner"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039309 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a881e66-97bc-43af-8be8-1cf11ec61b72" containerName="registry-server"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039327 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b9348a-b44f-4ecf-9043-0948b992d64e" containerName="oc"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039348 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="133dbd75-9538-4a08-8fb6-faa198403f55" containerName="route-controller-manager"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039367 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba93aa6-3011-4955-9efe-d53362976685" containerName="pruner"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039390 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="262ce385-a8fb-4909-a456-7eeea3f77634" containerName="pruner"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039407 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e950fab-d108-486f-83de-8c085384be65" containerName="controller-manager"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039435 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9513b8-113e-4a7d-8c23-b5596888d66a" containerName="registry-server"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.039453 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c357ab0-449d-4192-8e06-c09c0f8b59c6" containerName="pruner"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.040563 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.046243 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.046560 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"]
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.046777 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.046870 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.047208 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.047229 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.047239 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.048089 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.052986 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.053260 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.053639 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.054423 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.054694 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.054977 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.055728 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"]
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.061644 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"]
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.072042 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.196134 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-client-ca\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.196185 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-serving-cert\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.196216 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5324b1-ddf4-4216-9d76-43d60b13cc06-serving-cert\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.196259 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-config\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.196315 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-client-ca\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.196636 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr5pq\" (UniqueName: \"kubernetes.io/projected/1d5324b1-ddf4-4216-9d76-43d60b13cc06-kube-api-access-jr5pq\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.196779 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-proxy-ca-bundles\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.196949 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fszn\" (UniqueName: \"kubernetes.io/projected/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-kube-api-access-5fszn\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.197061 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-config\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.298304 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-client-ca\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.299821 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-serving-cert\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.299889 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5324b1-ddf4-4216-9d76-43d60b13cc06-serving-cert\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 
14:53:50.300002 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-config\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.300035 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-client-ca\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.300085 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr5pq\" (UniqueName: \"kubernetes.io/projected/1d5324b1-ddf4-4216-9d76-43d60b13cc06-kube-api-access-jr5pq\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.300119 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-client-ca\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.300143 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-proxy-ca-bundles\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.300265 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fszn\" (UniqueName: \"kubernetes.io/projected/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-kube-api-access-5fszn\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.300339 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-config\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.301950 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-client-ca\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.302380 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-config\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.302488 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-proxy-ca-bundles\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.302675 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-config\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.312436 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-serving-cert\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.312788 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5324b1-ddf4-4216-9d76-43d60b13cc06-serving-cert\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.327580 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr5pq\" (UniqueName: \"kubernetes.io/projected/1d5324b1-ddf4-4216-9d76-43d60b13cc06-kube-api-access-jr5pq\") pod \"controller-manager-5c967dc9f7-gqfvj\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.335783 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fszn\" (UniqueName: \"kubernetes.io/projected/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-kube-api-access-5fszn\") pod \"route-controller-manager-55fbb658d-jgzvx\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.387552 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.406946 4982 util.go:30] "No sandbox for pod can be found. 
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.406946 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.672776 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"]
Feb 24 14:53:50 crc kubenswrapper[4982]: W0224 14:53:50.678752 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d5324b1_ddf4_4216_9d76_43d60b13cc06.slice/crio-11365907fc6069f11b6d491abbba21d9135ceddbb9331284201a5b9775da97d4 WatchSource:0}: Error finding container 11365907fc6069f11b6d491abbba21d9135ceddbb9331284201a5b9775da97d4: Status 404 returned error can't find the container with id 11365907fc6069f11b6d491abbba21d9135ceddbb9331284201a5b9775da97d4
Feb 24 14:53:50 crc kubenswrapper[4982]: W0224 14:53:50.848325 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97c93757_cb7f_4278_8bfb_5fa3a5ebd512.slice/crio-27d71489e44e8ea761bd8fd15d795d53cd3f00651e88d5a08d3a54322d559a51 WatchSource:0}: Error finding container 27d71489e44e8ea761bd8fd15d795d53cd3f00651e88d5a08d3a54322d559a51: Status 404 returned error can't find the container with id 27d71489e44e8ea761bd8fd15d795d53cd3f00651e88d5a08d3a54322d559a51
Feb 24 14:53:50 crc kubenswrapper[4982]: I0224 14:53:50.850044 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"]
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.153270 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133dbd75-9538-4a08-8fb6-faa198403f55" path="/var/lib/kubelet/pods/133dbd75-9538-4a08-8fb6-faa198403f55/volumes"
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.154078 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e950fab-d108-486f-83de-8c085384be65" path="/var/lib/kubelet/pods/4e950fab-d108-486f-83de-8c085384be65/volumes"
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.652968 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" event={"ID":"97c93757-cb7f-4278-8bfb-5fa3a5ebd512","Type":"ContainerStarted","Data":"aa69eeac4d4030f6af5e506a3e857dd58e196a57ff696fce93e14e29abdb1b40"}
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.653307 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.653318 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" event={"ID":"97c93757-cb7f-4278-8bfb-5fa3a5ebd512","Type":"ContainerStarted","Data":"27d71489e44e8ea761bd8fd15d795d53cd3f00651e88d5a08d3a54322d559a51"}
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.655619 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" event={"ID":"1d5324b1-ddf4-4216-9d76-43d60b13cc06","Type":"ContainerStarted","Data":"1c05b015e6ffa5caec7cd4cb582c11a07c6afe13785b82e651b8d694617d7020"}
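The burst above traces two pods from volume reconciliation to first containers: each "operationExecutor.MountVolume started" from reconciler_common.go is paired with a "MountVolume.SetUp succeeded" from operation_generator.go once the plugin has mounted the volume, after which the sandboxes start and PLEG reports ContainerStarted. A minimal sketch of the desired-versus-actual reconcile pattern behind those mount pairs, with invented types, names, and a stand-in setUp function rather than kubelet's real API:

package main

import "fmt"

// volume is an invented stand-in for an entry in the desired state of world.
type volume struct{ uniqueName, pod string }

// setUp stands in for MountVolume.SetUp; the real path calls a volume plugin.
func setUp(v volume) error {
	fmt.Printf("MountVolume started for volume %q pod %q\n", v.uniqueName, v.pod)
	return nil // assume the mount succeeds, as every volume here does in the log
}

// reconcile mounts whatever is desired but not yet present in actual state.
func reconcile(desired []volume, actual map[string]bool) {
	for _, v := range desired {
		if actual[v.uniqueName] {
			continue // already mounted, nothing to do
		}
		if err := setUp(v); err == nil {
			actual[v.uniqueName] = true
			fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.uniqueName, v.pod)
		}
	}
}

func main() {
	desired := []volume{
		{"kubernetes.io/configmap/example-uid-client-ca", "controller-manager-example"},
		{"kubernetes.io/secret/example-uid-serving-cert", "route-controller-manager-example"},
	}
	reconcile(desired, map[string]bool{}) // a second call with the same map would be a no-op
}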
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.655664 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" event={"ID":"1d5324b1-ddf4-4216-9d76-43d60b13cc06","Type":"ContainerStarted","Data":"11365907fc6069f11b6d491abbba21d9135ceddbb9331284201a5b9775da97d4"}
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.655949 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.660917 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.661542 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.682617 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" podStartSLOduration=3.682589533 podStartE2EDuration="3.682589533s" podCreationTimestamp="2026-02-24 14:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:53:51.674822868 +0000 UTC m=+293.293881351" watchObservedRunningTime="2026-02-24 14:53:51.682589533 +0000 UTC m=+293.301648056"
Feb 24 14:53:51 crc kubenswrapper[4982]: I0224 14:53:51.720447 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" podStartSLOduration=3.720423935 podStartE2EDuration="3.720423935s" podCreationTimestamp="2026-02-24 14:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:53:51.719899511 +0000 UTC m=+293.338958004" watchObservedRunningTime="2026-02-24 14:53:51.720423935 +0000 UTC m=+293.339482468"
Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.348179 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vndg2"
Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.349339 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vndg2"
Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.422324 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vndg2"
Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.567305 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-stdjb"
Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.567408 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-stdjb"
Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.635084 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-stdjb"
Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.736271 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vndg2"
Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.737906 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-stdjb"
Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 
14:53:54.765639 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.765670 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.845658 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.966846 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:53:54 crc kubenswrapper[4982]: I0224 14:53:54.966945 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:53:55 crc kubenswrapper[4982]: I0224 14:53:55.022440 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:53:55 crc kubenswrapper[4982]: I0224 14:53:55.746031 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:53:55 crc kubenswrapper[4982]: I0224 14:53:55.756257 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.967111 4982 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.967955 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.971439 4982 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.971481 4982 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 14:53:56 crc kubenswrapper[4982]: E0224 14:53:56.971775 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.971802 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 14:53:56 crc kubenswrapper[4982]: E0224 14:53:56.971818 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.971826 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 14:53:56 crc kubenswrapper[4982]: E0224 14:53:56.971839 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.971846 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 14:53:56 crc kubenswrapper[4982]: E0224 14:53:56.971854 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.971861 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 24 14:53:56 crc kubenswrapper[4982]: E0224 14:53:56.971871 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.971879 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 24 14:53:56 crc kubenswrapper[4982]: E0224 14:53:56.971892 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.971900 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 24 14:53:56 crc kubenswrapper[4982]: E0224 14:53:56.971908 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.971915 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972019 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972032 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972043 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972051 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972063 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972074 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972281 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256" gracePeriod=15
Feb 24 14:53:56 crc kubenswrapper[4982]: E0224 14:53:56.972711 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
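The "Killing container with a grace period" entry above (gracePeriod=15, with four more to follow for the pod's remaining containers) is the kubelet tearing down the old static kube-apiserver pod after its manifest changed, the SyncLoop REMOVE/ADD pair with source="file" a few entries earlier. The contract is a polite termination signal first and a hard kill only once the deadline expires. A minimal sketch of that stop sequence against a plain Unix process, not the real CRI-O StopContainer RPC:

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace asks the process to exit with SIGTERM and escalates to
// SIGKILL once the grace period runs out, mirroring gracePeriod=15 above.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM)

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err // exited on its own within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // deadline passed: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in workload; assumes a Unix system
	if err := cmd.Start(); err != nil {
		return
	}
	_ = stopWithGrace(cmd, 15*time.Second)
}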
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972739 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 24 14:53:56 crc kubenswrapper[4982]: E0224 14:53:56.972757 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972768 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972902 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.972916 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.973261 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8" gracePeriod=15
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.973424 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e" gracePeriod=15
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.973469 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4" gracePeriod=15
Feb 24 14:53:56 crc kubenswrapper[4982]: I0224 14:53:56.973571 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90" gracePeriod=15
Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.018108 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-f5kdd"]
Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.104263 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.104381 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.104406 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.104429 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.104444 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.104477 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.104517 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.104535 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.107046 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205355 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205402 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205422 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205454 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205471 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205486 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205553 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205567 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205589 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205609 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205537 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205637 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc 
kubenswrapper[4982]: I0224 14:53:57.205688 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205852 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.205909 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.206339 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.405795 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:53:57 crc kubenswrapper[4982]: W0224 14:53:57.425017 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-14c7244434c44b3a4295ef1cee834f734383adab2ca84b3c3f95f58f2c94c319 WatchSource:0}: Error finding container 14c7244434c44b3a4295ef1cee834f734383adab2ca84b3c3f95f58f2c94c319: Status 404 returned error can't find the container with id 14c7244434c44b3a4295ef1cee834f734383adab2ca84b3c3f95f58f2c94c319 Feb 24 14:53:57 crc kubenswrapper[4982]: E0224 14:53:57.427947 4982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189736710470988a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:53:57.427271818 +0000 UTC m=+299.046330351,LastTimestamp:2026-02-24 14:53:57.427271818 +0000 UTC m=+299.046330351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:53:57 crc kubenswrapper[4982]: 
I0224 14:53:57.699095 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"14c7244434c44b3a4295ef1cee834f734383adab2ca84b3c3f95f58f2c94c319"} Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.701572 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.702768 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.703578 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8" exitCode=0 Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.703608 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4" exitCode=0 Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.703620 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e" exitCode=0 Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.703629 4982 scope.go:117] "RemoveContainer" containerID="482f8637a096e4a2a90c964128ce4a14e71e846aafcde973e58ada134d5e13ee" Feb 24 14:53:57 crc kubenswrapper[4982]: I0224 14:53:57.703630 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90" exitCode=2 Feb 24 14:53:58 crc kubenswrapper[4982]: I0224 14:53:58.715662 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0c9bf3e712ada87a63ef2870877b499c03076930e9d7a74c367119890d3f7854"} Feb 24 14:53:58 crc kubenswrapper[4982]: I0224 14:53:58.720916 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.447068 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.449097 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.554821 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.554885 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.554953 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.554953 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.554981 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.555055 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.555486 4982 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.555531 4982 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.555544 4982 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.736340 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.738955 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256" exitCode=0 Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.739133 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.739138 4982 scope.go:117] "RemoveContainer" containerID="7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.768950 4982 scope.go:117] "RemoveContainer" containerID="df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.793569 4982 scope.go:117] "RemoveContainer" containerID="d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.814776 4982 scope.go:117] "RemoveContainer" containerID="5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.834550 4982 scope.go:117] "RemoveContainer" containerID="5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.861370 4982 scope.go:117] "RemoveContainer" containerID="e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.888830 4982 scope.go:117] "RemoveContainer" containerID="7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8" Feb 24 14:53:59 crc kubenswrapper[4982]: E0224 14:53:59.889447 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\": container with ID starting with 7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8 not found: ID does not exist" containerID="7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.889530 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8"} err="failed to get container status 
\"7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\": rpc error: code = NotFound desc = could not find container \"7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8\": container with ID starting with 7d0aabbb4abce2b511cc2cc22b79ab746a7371e6280e2d719ac95ff3f19d21a8 not found: ID does not exist" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.889567 4982 scope.go:117] "RemoveContainer" containerID="df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4" Feb 24 14:53:59 crc kubenswrapper[4982]: E0224 14:53:59.890141 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\": container with ID starting with df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4 not found: ID does not exist" containerID="df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.890194 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4"} err="failed to get container status \"df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\": rpc error: code = NotFound desc = could not find container \"df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4\": container with ID starting with df2129b877573f651cdb1343123bbdb2bf927804cb5d349242a64fb2aee966e4 not found: ID does not exist" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.890234 4982 scope.go:117] "RemoveContainer" containerID="d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e" Feb 24 14:53:59 crc kubenswrapper[4982]: E0224 14:53:59.890636 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\": container with ID starting with d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e not found: ID does not exist" containerID="d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.890663 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e"} err="failed to get container status \"d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\": rpc error: code = NotFound desc = could not find container \"d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e\": container with ID starting with d40f1ea35828b7ddb75554ffe7bdc123933730885a1cc910cb55595271fc922e not found: ID does not exist" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.890682 4982 scope.go:117] "RemoveContainer" containerID="5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90" Feb 24 14:53:59 crc kubenswrapper[4982]: E0224 14:53:59.891211 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\": container with ID starting with 5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90 not found: ID does not exist" containerID="5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.891237 4982 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90"} err="failed to get container status \"5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\": rpc error: code = NotFound desc = could not find container \"5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90\": container with ID starting with 5e0aa3776de8ae25af9155de0596e87c6f769d2613639d9e88509ea463ac5c90 not found: ID does not exist" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.891256 4982 scope.go:117] "RemoveContainer" containerID="5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256" Feb 24 14:53:59 crc kubenswrapper[4982]: E0224 14:53:59.891733 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\": container with ID starting with 5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256 not found: ID does not exist" containerID="5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.891765 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256"} err="failed to get container status \"5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\": rpc error: code = NotFound desc = could not find container \"5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256\": container with ID starting with 5e6ba3f4c3f19df9c26fdc0ee7c67b1dbca7c49f6b7bd50774aef2328040a256 not found: ID does not exist" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.891786 4982 scope.go:117] "RemoveContainer" containerID="e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10" Feb 24 14:53:59 crc kubenswrapper[4982]: E0224 14:53:59.892198 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\": container with ID starting with e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10 not found: ID does not exist" containerID="e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10" Feb 24 14:53:59 crc kubenswrapper[4982]: I0224 14:53:59.892226 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10"} err="failed to get container status \"e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\": rpc error: code = NotFound desc = could not find container \"e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10\": container with ID starting with e6f9fa29791bdbab6f127ba7cc57fcdd78f3dbcc4937429a4e22b07aae0dca10 not found: ID does not exist" Feb 24 14:54:01 crc kubenswrapper[4982]: I0224 14:54:01.158890 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 24 14:54:01 crc kubenswrapper[4982]: E0224 14:54:01.925348 4982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189736710470988a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 14:53:57.427271818 +0000 UTC m=+299.046330351,LastTimestamp:2026-02-24 14:53:57.427271818 +0000 UTC m=+299.046330351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 14:54:02 crc kubenswrapper[4982]: I0224 14:54:02.032029 4982 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:02 crc kubenswrapper[4982]: I0224 14:54:02.032792 4982 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:02 crc kubenswrapper[4982]: I0224 14:54:02.038056 4982 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:02 crc kubenswrapper[4982]: I0224 14:54:02.773208 4982 generic.go:334] "Generic (PLEG): container finished" podID="875ece47-3110-4574-9725-0d76af8a8498" containerID="3eb48e187d946ec8603bf9f09151fc5cedc637f631394f5292966fa809de1357" exitCode=0 Feb 24 14:54:02 crc kubenswrapper[4982]: I0224 14:54:02.773353 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"875ece47-3110-4574-9725-0d76af8a8498","Type":"ContainerDied","Data":"3eb48e187d946ec8603bf9f09151fc5cedc637f631394f5292966fa809de1357"} Feb 24 14:54:02 crc kubenswrapper[4982]: I0224 14:54:02.774363 4982 status_manager.go:851] "Failed to get status for pod" podUID="875ece47-3110-4574-9725-0d76af8a8498" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:02 crc kubenswrapper[4982]: I0224 14:54:02.775124 4982 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:04 crc kubenswrapper[4982]: 
I0224 14:54:04.256454 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.257631 4982 status_manager.go:851] "Failed to get status for pod" podUID="875ece47-3110-4574-9725-0d76af8a8498" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.258169 4982 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.448980 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-var-lock\") pod \"875ece47-3110-4574-9725-0d76af8a8498\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.449082 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-kubelet-dir\") pod \"875ece47-3110-4574-9725-0d76af8a8498\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.449166 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/875ece47-3110-4574-9725-0d76af8a8498-kube-api-access\") pod \"875ece47-3110-4574-9725-0d76af8a8498\" (UID: \"875ece47-3110-4574-9725-0d76af8a8498\") " Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.449283 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "875ece47-3110-4574-9725-0d76af8a8498" (UID: "875ece47-3110-4574-9725-0d76af8a8498"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.449284 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-var-lock" (OuterVolumeSpecName: "var-lock") pod "875ece47-3110-4574-9725-0d76af8a8498" (UID: "875ece47-3110-4574-9725-0d76af8a8498"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.449852 4982 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-var-lock\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.449892 4982 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/875ece47-3110-4574-9725-0d76af8a8498-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.459357 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875ece47-3110-4574-9725-0d76af8a8498-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "875ece47-3110-4574-9725-0d76af8a8498" (UID: "875ece47-3110-4574-9725-0d76af8a8498"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:54:04 crc kubenswrapper[4982]: E0224 14:54:04.485836 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:04 crc kubenswrapper[4982]: E0224 14:54:04.486406 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:04 crc kubenswrapper[4982]: E0224 14:54:04.488128 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:04 crc kubenswrapper[4982]: E0224 14:54:04.489439 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:04 crc kubenswrapper[4982]: E0224 14:54:04.490192 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.490248 4982 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 24 14:54:04 crc kubenswrapper[4982]: E0224 14:54:04.490802 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms"
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.551034 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/875ece47-3110-4574-9725-0d76af8a8498-kube-api-access\") on node \"crc\" DevicePath \"\""
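Two retry regimes are visible above. The node-lease controller first fails five straight "Failed to update lease" attempts and falls back to ensuring the lease exists ("failed 5 attempts to update lease"); from there each "will retry" doubles the interval, 200ms here and then 400ms, 800ms, 1.6s and 3.2s in the entries that follow. A minimal sketch of that capped exponential backoff, with a stand-in renew function and an assumed cap (the real client-go backoff parameters are not shown in this log):

package main

import (
	"errors"
	"fmt"
	"time"
)

// renew stands in for the lease request against the API server; while the
// apiserver is down it always fails, exactly as the log shows.
func renew() error { return errors.New("connect: connection refused") }

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap for the sketch

	for attempt := 0; attempt < 5; attempt++ {
		if renew() == nil {
			return // renewed: drop back to the normal renew period
		}
		fmt.Printf("Failed to ensure lease exists, will retry interval=%q\n", interval.String())
		time.Sleep(interval)
		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}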
Feb 24 14:54:04 crc kubenswrapper[4982]: E0224 14:54:04.692447 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms"
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.789991 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"875ece47-3110-4574-9725-0d76af8a8498","Type":"ContainerDied","Data":"77eaf0feaea2c5426a276287fac8ed8eb3c047345f0714b4a12e1f40e36f7bcb"}
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.790059 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77eaf0feaea2c5426a276287fac8ed8eb3c047345f0714b4a12e1f40e36f7bcb"
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.790092 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.815815 4982 status_manager.go:851] "Failed to get status for pod" podUID="875ece47-3110-4574-9725-0d76af8a8498" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:04 crc kubenswrapper[4982]: I0224 14:54:04.816206 4982 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:05 crc kubenswrapper[4982]: E0224 14:54:05.093472 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms"
Feb 24 14:54:05 crc kubenswrapper[4982]: E0224 14:54:05.894552 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s"
Feb 24 14:54:07 crc kubenswrapper[4982]: E0224 14:54:07.188543 4982 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" volumeName="registry-storage"
Feb 24 14:54:07 crc kubenswrapper[4982]: E0224 14:54:07.266233 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:54:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:54:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:54:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T14:54:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:07 crc kubenswrapper[4982]: E0224 14:54:07.266773 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:07 crc kubenswrapper[4982]: E0224 14:54:07.267241 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:07 crc kubenswrapper[4982]: E0224 14:54:07.268058 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:07 crc kubenswrapper[4982]: E0224 14:54:07.268639 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Feb 24 14:54:07 crc kubenswrapper[4982]: E0224 14:54:07.268676 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 14:54:07 crc kubenswrapper[4982]: E0224 14:54:07.495322 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s" Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.145270 4982 util.go:30] "No sandbox for pod can be found. 
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.145270 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.146073 4982 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.146528 4982 status_manager.go:851] "Failed to get status for pod" podUID="875ece47-3110-4574-9725-0d76af8a8498" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.163670 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.163694 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
Feb 24 14:54:08 crc kubenswrapper[4982]: E0224 14:54:08.163942 4982 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.164387 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:08 crc kubenswrapper[4982]: W0224 14:54:08.192893 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-84383f72d469051bc0c549786ec3e898131cc9d74d22227552ce9fead2746ed6 WatchSource:0}: Error finding container 84383f72d469051bc0c549786ec3e898131cc9d74d22227552ce9fead2746ed6: Status 404 returned error can't find the container with id 84383f72d469051bc0c549786ec3e898131cc9d74d22227552ce9fead2746ed6
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.822899 4982 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d522fe2d39dc28e7b44d3df4a2793b145feb48691e3c10b7f151842396c29ac2" exitCode=0
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.823017 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d522fe2d39dc28e7b44d3df4a2793b145feb48691e3c10b7f151842396c29ac2"}
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.823355 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"84383f72d469051bc0c549786ec3e898131cc9d74d22227552ce9fead2746ed6"}
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.823740 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.823774 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
Feb 24 14:54:08 crc kubenswrapper[4982]: E0224 14:54:08.824340 4982 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.824355 4982 status_manager.go:851] "Failed to get status for pod" podUID="875ece47-3110-4574-9725-0d76af8a8498" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:08 crc kubenswrapper[4982]: I0224 14:54:08.824719 4982 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.169780 4982 status_manager.go:851] "Failed to get status for pod" podUID="875ece47-3110-4574-9725-0d76af8a8498" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.170426 4982 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.170930 4982 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused"
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.840787 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1cfc344944c401fdbe0b542ead61e232e078443620b5cfd0e4d5e85fb4600a50"}
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.840836 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"227b75a8f0c7bf2c351962bb35b8c3386bda6e5266adda1ae8930463fe1b5ca3"}
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.840848 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"840c9d4ff5e886a29f8900e98c9e6b3f0fd3f13b87a0aeb90eddeda69872bb00"}
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.846100 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.847649 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.847716 4982 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6" exitCode=1
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.847752 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6"}
Feb 24 14:54:09 crc kubenswrapper[4982]: I0224 14:54:09.848248 4982 scope.go:117] "RemoveContainer" containerID="061cef42ed58a1263ee70e785fdb008437a3a8b06f4561a06e00dae024fac6c6"
Feb 24 14:54:10 crc kubenswrapper[4982]: I0224 14:54:10.855943 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 24 14:54:10 crc kubenswrapper[4982]: I0224 14:54:10.856709 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 24 14:54:10 crc kubenswrapper[4982]: I0224 14:54:10.856765 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d386f5445a444621f19326cc2843e0802c10ee6a3ecb114afb185c6a15baecf4"}
Feb 24 14:54:10 crc kubenswrapper[4982]: I0224 14:54:10.859747 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c8cf97ed7573bcba399a69fa6e2705b3db12b3c0e6e837306a076df7b3159b50"}
Feb 24 14:54:10 crc kubenswrapper[4982]: I0224 14:54:10.859778 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1fa1f284a2e9cda1dd775cb48d4027dff90a1b47b1e33d82129218b2ed63a60d"}
Feb 24 14:54:10 crc kubenswrapper[4982]: I0224 14:54:10.859970 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
Feb 24 14:54:10 crc kubenswrapper[4982]: I0224 14:54:10.859992 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
Feb 24 14:54:10 crc kubenswrapper[4982]: I0224 14:54:10.860181 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:11 crc kubenswrapper[4982]: I0224 14:54:11.714295 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 14:54:13 crc kubenswrapper[4982]: I0224 14:54:13.165583 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:13 crc kubenswrapper[4982]: I0224 14:54:13.166011 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:13 crc kubenswrapper[4982]: I0224 14:54:13.173693 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:13 crc kubenswrapper[4982]: I0224 14:54:13.369145 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 14:54:13 crc kubenswrapper[4982]: I0224 14:54:13.378151 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 14:54:15 crc kubenswrapper[4982]: I0224 14:54:15.874425 4982 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:16 crc kubenswrapper[4982]: I0224 14:54:16.898115 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
Feb 24 14:54:16 crc kubenswrapper[4982]: I0224 14:54:16.898454 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
Feb 24 14:54:16 crc kubenswrapper[4982]: I0224 14:54:16.902332 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 14:54:16 crc kubenswrapper[4982]: I0224 14:54:16.906780 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f4e01ad2-7e5a-474b-85e8-28dc1f3133e7"
Feb 24 14:54:17 crc kubenswrapper[4982]: I0224 14:54:17.905055 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
Feb 24 14:54:17 crc kubenswrapper[4982]: I0224 14:54:17.905101 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="612fad56-511c-4961-aab1-974e6a1019ec"
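
The "SyncLoop (probe)" transitions above trace the normal bring-up sequence: the startup probe reports "unhealthy" until its first success ("started" at 14:54:13), after which readiness takes over and flips to "ready" at 14:54:16, letting the outdated mirror pod finally be replaced. A rough analogue of a startup-probe loop, assuming an HTTPS health endpoint; the URL, period, and TLS handling are illustrative and not read from the pod spec:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Poll until the endpoint answers 200, then hand off to readiness.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://localhost:6443/healthz") // assumed endpoint
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("startup probe: started")
                    return
                }
            }
            fmt.Println("startup probe: unhealthy")
            time.Sleep(2 * time.Second) // assumed probe period
        }
    }
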
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.151036 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.151166 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.151203 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.151237 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.154592 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.156125 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.156350 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.162905 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.168199 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.168745 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.177248 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.177757 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.268037 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.282386 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.295030 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 14:54:18 crc kubenswrapper[4982]: W0224 14:54:18.699017 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-ae4e7cd1fbb25ddad31ff416cfef76f1cb6b234f12ca7eca515d1857b6b1ce56 WatchSource:0}: Error finding container ae4e7cd1fbb25ddad31ff416cfef76f1cb6b234f12ca7eca515d1857b6b1ce56: Status 404 returned error can't find the container with id ae4e7cd1fbb25ddad31ff416cfef76f1cb6b234f12ca7eca515d1857b6b1ce56
Feb 24 14:54:18 crc kubenswrapper[4982]: W0224 14:54:18.789284 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3c49cf49ead84eac3784eeea3b35252dbb113e6f125a0eaa92e2a2cf9cdf9f9f WatchSource:0}: Error finding container 3c49cf49ead84eac3784eeea3b35252dbb113e6f125a0eaa92e2a2cf9cdf9f9f: Status 404 returned error can't find the container with id 3c49cf49ead84eac3784eeea3b35252dbb113e6f125a0eaa92e2a2cf9cdf9f9f
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.915387 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"01758b19ab0e60d5b6641caca338e8bea635c319bb77ee5f422a776d0f4c8e61"}
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.915427 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"50d98ceba6126d2ec6620b6226fc8dc25070e5d5cc2926380be0749d0ecd42ed"}
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.917444 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"99e592a698859fd103bb6e98e3d2ca8dae3d4e07d7bcde1e8e51463abd2db585"}
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.917474 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ae4e7cd1fbb25ddad31ff416cfef76f1cb6b234f12ca7eca515d1857b6b1ce56"}
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.917982 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.919954 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8e7cc92f6ecf53f9ad764e8767058ff2554441a2270a6d03d40ae296170b421c"}
Feb 24 14:54:18 crc kubenswrapper[4982]: I0224 14:54:18.919984 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3c49cf49ead84eac3784eeea3b35252dbb113e6f125a0eaa92e2a2cf9cdf9f9f"}
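
Each "SyncLoop (PLEG): event for pod" entry carries an event={...} payload that is plain JSON: the pod UID, an event type such as ContainerStarted or ContainerDied, and a container or sandbox ID. When mining a log like this one, the payload can be decoded directly; the struct below simply mirrors the field names visible in the entries, using one of the payloads above as sample input:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Shape of the event={...} payload in the "SyncLoop (PLEG)" entries;
    // field names are taken from the log text itself.
    type plegEvent struct {
        ID   string
        Type string
        Data string
    }

    func main() {
        raw := `{"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"01758b19ab0e60d5b6641caca338e8bea635c319bb77ee5f422a776d0f4c8e61"}`
        var ev plegEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
    }
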
status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f4e01ad2-7e5a-474b-85e8-28dc1f3133e7" Feb 24 14:54:20 crc kubenswrapper[4982]: I0224 14:54:20.934904 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 24 14:54:20 crc kubenswrapper[4982]: I0224 14:54:20.935152 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"8e7cc92f6ecf53f9ad764e8767058ff2554441a2270a6d03d40ae296170b421c"} Feb 24 14:54:20 crc kubenswrapper[4982]: I0224 14:54:20.935422 4982 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="8e7cc92f6ecf53f9ad764e8767058ff2554441a2270a6d03d40ae296170b421c" exitCode=255 Feb 24 14:54:20 crc kubenswrapper[4982]: I0224 14:54:20.935983 4982 scope.go:117] "RemoveContainer" containerID="8e7cc92f6ecf53f9ad764e8767058ff2554441a2270a6d03d40ae296170b421c" Feb 24 14:54:21 crc kubenswrapper[4982]: I0224 14:54:21.719638 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 14:54:21 crc kubenswrapper[4982]: I0224 14:54:21.944477 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 24 14:54:21 crc kubenswrapper[4982]: I0224 14:54:21.944736 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b5a22f84993bc0de1aab306f315786c5a7d131edec16643bbb7b9d0f9c4cd087"} Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.052634 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" podUID="b36f6a63-d48a-4adb-bdb0-3b63c7679981" containerName="oauth-openshift" containerID="cri-o://c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26" gracePeriod=15 Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.352411 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.550182 4982 util.go:48] "No ready sandbox for pod can be found. 
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.550182 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd"
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.598356 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.710262 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-service-ca\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.710375 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cj4v\" (UniqueName: \"kubernetes.io/projected/b36f6a63-d48a-4adb-bdb0-3b63c7679981-kube-api-access-4cj4v\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.710430 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-session\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.710763 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-dir\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.710877 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-ocp-branding-template\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.710925 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.710935 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-login\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711016 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711047 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-cliconfig\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711100 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-router-certs\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711185 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-provider-selection\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711226 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-policies\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711276 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-idp-0-file-data\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711330 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-error\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711375 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-serving-cert\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711413 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-trusted-ca-bundle\") pod \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\" (UID: \"b36f6a63-d48a-4adb-bdb0-3b63c7679981\") "
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711416 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.711660 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.712031 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.712061 4982 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.712084 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.712104 4982 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.712740 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.719088 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36f6a63-d48a-4adb-bdb0-3b63c7679981-kube-api-access-4cj4v" (OuterVolumeSpecName: "kube-api-access-4cj4v") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "kube-api-access-4cj4v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.719633 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.719754 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.720115 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.720133 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.720264 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.720441 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.720907 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.726712 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b36f6a63-d48a-4adb-bdb0-3b63c7679981" (UID: "b36f6a63-d48a-4adb-bdb0-3b63c7679981"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813158 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813198 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813211 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813226 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cj4v\" (UniqueName: \"kubernetes.io/projected/b36f6a63-d48a-4adb-bdb0-3b63c7679981-kube-api-access-4cj4v\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813239 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813251 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813262 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813274 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813287 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.813301 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b36f6a63-d48a-4adb-bdb0-3b63c7679981-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.954334 4982 generic.go:334] "Generic (PLEG): container finished" podID="b36f6a63-d48a-4adb-bdb0-3b63c7679981" containerID="c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26" exitCode=0
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.954416 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd"
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.954453 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" event={"ID":"b36f6a63-d48a-4adb-bdb0-3b63c7679981","Type":"ContainerDied","Data":"c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26"}
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.954576 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-f5kdd" event={"ID":"b36f6a63-d48a-4adb-bdb0-3b63c7679981","Type":"ContainerDied","Data":"d8fdae7226bfcd7f1c578a08adae79255d45558a03e1e0827cddd09315ba33af"}
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.954628 4982 scope.go:117] "RemoveContainer" containerID="c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26"
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.964348 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log"
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.965177 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log"
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.965231 4982 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="b5a22f84993bc0de1aab306f315786c5a7d131edec16643bbb7b9d0f9c4cd087" exitCode=255
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.965264 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"b5a22f84993bc0de1aab306f315786c5a7d131edec16643bbb7b9d0f9c4cd087"}
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.965793 4982 scope.go:117] "RemoveContainer" containerID="b5a22f84993bc0de1aab306f315786c5a7d131edec16643bbb7b9d0f9c4cd087"
Feb 24 14:54:22 crc kubenswrapper[4982]: E0224 14:54:22.966079 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.991648 4982 scope.go:117] "RemoveContainer" containerID="c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26"
Feb 24 14:54:22 crc kubenswrapper[4982]: E0224 14:54:22.992288 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26\": container with ID starting with c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26 not found: ID does not exist" containerID="c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26"
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.992512 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26"} err="failed to get container status \"c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26\": rpc error: code = NotFound desc = could not find container \"c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26\": container with ID starting with c2226265295505fd0065b2623045b900a52c684e3015b6028e653da67cc47c26 not found: ID does not exist"
Feb 24 14:54:22 crc kubenswrapper[4982]: I0224 14:54:22.992789 4982 scope.go:117] "RemoveContainer" containerID="8e7cc92f6ecf53f9ad764e8767058ff2554441a2270a6d03d40ae296170b421c"
Feb 24 14:54:23 crc kubenswrapper[4982]: I0224 14:54:23.292389 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 14:54:23 crc kubenswrapper[4982]: I0224 14:54:23.710129 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 24 14:54:23 crc kubenswrapper[4982]: I0224 14:54:23.952376 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 24 14:54:23 crc kubenswrapper[4982]: I0224 14:54:23.974717 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log"
Feb 24 14:54:25 crc kubenswrapper[4982]: I0224 14:54:25.399832 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 14:54:25 crc kubenswrapper[4982]: I0224 14:54:25.605696 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 24 14:54:25 crc kubenswrapper[4982]: I0224 14:54:25.943710 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 14:54:26 crc kubenswrapper[4982]: I0224 14:54:26.208419 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 24 14:54:26 crc kubenswrapper[4982]: I0224 14:54:26.348049 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 24 14:54:26 crc kubenswrapper[4982]: I0224 14:54:26.431814 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 14:54:26 crc kubenswrapper[4982]: I0224 14:54:26.562286 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 24 14:54:26 crc kubenswrapper[4982]: I0224 14:54:26.792381 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 24 14:54:27 crc kubenswrapper[4982]: I0224 14:54:27.766781 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 24 14:54:27 crc kubenswrapper[4982]: I0224 14:54:27.896549 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 24 14:54:28 crc kubenswrapper[4982]: I0224 14:54:28.201591 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 24 14:54:28 crc kubenswrapper[4982]: I0224 14:54:28.801633 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 24 14:54:28 crc kubenswrapper[4982]: I0224 14:54:28.868414 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 24 14:54:29 crc kubenswrapper[4982]: I0224 14:54:29.260667 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 24 14:54:29 crc kubenswrapper[4982]: I0224 14:54:29.531590 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 24 14:54:29 crc kubenswrapper[4982]: I0224 14:54:29.647622 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 24 14:54:29 crc kubenswrapper[4982]: I0224 14:54:29.744547 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 24 14:54:29 crc kubenswrapper[4982]: I0224 14:54:29.783657 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 24 14:54:29 crc kubenswrapper[4982]: I0224 14:54:29.904134 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 24 14:54:30 crc kubenswrapper[4982]: I0224 14:54:30.107446 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 24 14:54:30 crc kubenswrapper[4982]: I0224 14:54:30.419412 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 24 14:54:30 crc kubenswrapper[4982]: I0224 14:54:30.727376 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 24 14:54:30 crc kubenswrapper[4982]: I0224 14:54:30.788003 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 24 14:54:30 crc kubenswrapper[4982]: I0224 14:54:30.866980 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 24 14:54:30 crc kubenswrapper[4982]: I0224 14:54:30.951754 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 14:54:30 crc kubenswrapper[4982]: I0224 14:54:30.999570 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.127288 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.132243 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.333756 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.510477 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.548855 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.586712 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.659626 4982 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.662972 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.664822 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.916884 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 24 14:54:31 crc kubenswrapper[4982]: I0224 14:54:31.934208 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.024785 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.025666 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.065014 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.189366 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.218391 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.348428 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.473606 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.544666 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.566092 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.624451 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.645547 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.749914 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.771986 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.821612 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.840133 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.905969 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 24 14:54:32 crc kubenswrapper[4982]: I0224 14:54:32.983439 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.020543 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.024859 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.112723 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.169225 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.336180 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.427561 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.492771 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.541158 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.656403 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.662429 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.822561 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.854865 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.871170 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 24 14:54:33 crc kubenswrapper[4982]: I0224 14:54:33.911904 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.067605 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.117557 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.131312 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.140847 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.189758 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.218784 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.273253 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.391926 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.532789 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.554920 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.557659 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.759683 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.808223 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.841472 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 24 14:54:34 crc kubenswrapper[4982]: I0224 14:54:34.992128 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.128473 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.209673 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.238156 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.302095 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.347999 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.393707 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.438380 4982 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.548740 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.577075 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.598138 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.717600 4982 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.741234 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.886115 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.886353 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.906723 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 24 14:54:35 crc kubenswrapper[4982]: I0224 14:54:35.971627 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.033230 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.055535 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.162047 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.169796 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.216907 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.261430 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.268873 4982 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.302291 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.368047 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.446875 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.639050 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.656019 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.762393 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.885598 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.928648 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.969250 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.981687 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 14:54:36 crc kubenswrapper[4982]: I0224 14:54:36.984297 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.021707 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.029603 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.029902 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.098609 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.118278 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.146107 4982 scope.go:117] "RemoveContainer" containerID="b5a22f84993bc0de1aab306f315786c5a7d131edec16643bbb7b9d0f9c4cd087" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.181550 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.220084 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 14:54:37 crc 
kubenswrapper[4982]: I0224 14:54:37.304735 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.385197 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.412289 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.416411 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.533266 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.558264 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.804032 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.806881 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.819966 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.883535 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.888061 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.897866 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.902386 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.913607 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.965409 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 14:54:37 crc kubenswrapper[4982]: I0224 14:54:37.969179 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.076129 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.076177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9dfb2c0721c4711d751215ec04dda95abdbab3492bb332a5cbb74613f94917dc"} Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.118983 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.230017 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.277925 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.327860 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.373419 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.458779 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.558844 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.639013 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.642583 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.644421 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.659423 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.672531 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.810311 4982 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.813065 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.813047352 podStartE2EDuration="41.813047352s" podCreationTimestamp="2026-02-24 14:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:54:15.73722311 +0000 UTC m=+317.356281643" watchObservedRunningTime="2026-02-24 14:54:38.813047352 +0000 UTC m=+340.432105835" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.814547 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-f5kdd"] Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.814595 4982 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc","openshift-infra/auto-csr-approver-29532414-26bx7","openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 14:54:38 crc kubenswrapper[4982]: E0224 14:54:38.814759 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875ece47-3110-4574-9725-0d76af8a8498" containerName="installer" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.814776 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="875ece47-3110-4574-9725-0d76af8a8498" containerName="installer" Feb 24 14:54:38 crc kubenswrapper[4982]: E0224 14:54:38.814793 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36f6a63-d48a-4adb-bdb0-3b63c7679981" containerName="oauth-openshift" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.814799 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36f6a63-d48a-4adb-bdb0-3b63c7679981" containerName="oauth-openshift" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.815214 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="875ece47-3110-4574-9725-0d76af8a8498" containerName="installer" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.815252 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36f6a63-d48a-4adb-bdb0-3b63c7679981" containerName="oauth-openshift" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.815716 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.815937 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bv7hr"] Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.816068 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532414-26bx7" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.816180 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bv7hr" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" containerName="registry-server" containerID="cri-o://036a30027029037d6092d3d4e4928cf1c66a5b79165d02d0ca47fb2dbbfa24be" gracePeriod=2 Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.820896 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.821149 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.821784 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.821819 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.822345 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.822987 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.823671 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.823758 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.823948 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.824035 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.824177 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.824278 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.824355 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.824428 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.823996 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.826351 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.828284 4982 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.830996 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.831312 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.847743 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjsc6\" (UniqueName: \"kubernetes.io/projected/652d0a6a-7206-4b47-b9ec-723351b9a056-kube-api-access-cjsc6\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.848269 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-session\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.849565 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.849673 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-template-error\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.849770 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.849840 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.849920 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-audit-policies\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: 
\"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.849996 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.850067 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/652d0a6a-7206-4b47-b9ec-723351b9a056-audit-dir\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.850152 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.850228 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.850314 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.850423 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.850513 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-template-login\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.853712 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 
14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.856990 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.856966543 podStartE2EDuration="23.856966543s" podCreationTimestamp="2026-02-24 14:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:54:38.851992244 +0000 UTC m=+340.471050737" watchObservedRunningTime="2026-02-24 14:54:38.856966543 +0000 UTC m=+340.476025046" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.887442 4982 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.905883 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.914379 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.952332 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.952398 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.952435 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzfdv\" (UniqueName: \"kubernetes.io/projected/cee4a86b-61ae-481f-9744-0635388f16ab-kube-api-access-jzfdv\") pod \"auto-csr-approver-29532414-26bx7\" (UID: \"cee4a86b-61ae-481f-9744-0635388f16ab\") " pod="openshift-infra/auto-csr-approver-29532414-26bx7" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.954262 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-audit-policies\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.954263 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.952467 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-audit-policies\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.954354 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.954385 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.954415 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/652d0a6a-7206-4b47-b9ec-723351b9a056-audit-dir\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.954447 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.954478 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.954554 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/652d0a6a-7206-4b47-b9ec-723351b9a056-audit-dir\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.955175 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.955346 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.955787 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.955814 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-template-login\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.956809 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjsc6\" (UniqueName: \"kubernetes.io/projected/652d0a6a-7206-4b47-b9ec-723351b9a056-kube-api-access-cjsc6\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.956845 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-session\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.956900 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.956927 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-template-error\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.962072 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.962780 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.963116 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-session\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.964155 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-template-error\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.964237 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.965671 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-user-template-login\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.968740 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.974337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/652d0a6a-7206-4b47-b9ec-723351b9a056-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.978017 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjsc6\" (UniqueName: \"kubernetes.io/projected/652d0a6a-7206-4b47-b9ec-723351b9a056-kube-api-access-cjsc6\") pod \"oauth-openshift-5fdb59f4df-mmsvc\" (UID: \"652d0a6a-7206-4b47-b9ec-723351b9a056\") " pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:38 crc kubenswrapper[4982]: I0224 14:54:38.998037 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.028084 4982 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.056371 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.058052 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzfdv\" (UniqueName: \"kubernetes.io/projected/cee4a86b-61ae-481f-9744-0635388f16ab-kube-api-access-jzfdv\") pod \"auto-csr-approver-29532414-26bx7\" (UID: \"cee4a86b-61ae-481f-9744-0635388f16ab\") " pod="openshift-infra/auto-csr-approver-29532414-26bx7" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.076488 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzfdv\" (UniqueName: \"kubernetes.io/projected/cee4a86b-61ae-481f-9744-0635388f16ab-kube-api-access-jzfdv\") pod \"auto-csr-approver-29532414-26bx7\" (UID: \"cee4a86b-61ae-481f-9744-0635388f16ab\") " pod="openshift-infra/auto-csr-approver-29532414-26bx7" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.095352 4982 generic.go:334] "Generic (PLEG): container finished" podID="5355d669-4f87-48b6-b389-09f97979f9c6" containerID="036a30027029037d6092d3d4e4928cf1c66a5b79165d02d0ca47fb2dbbfa24be" exitCode=0 Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.095706 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv7hr" event={"ID":"5355d669-4f87-48b6-b389-09f97979f9c6","Type":"ContainerDied","Data":"036a30027029037d6092d3d4e4928cf1c66a5b79165d02d0ca47fb2dbbfa24be"} Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.103390 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.154912 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36f6a63-d48a-4adb-bdb0-3b63c7679981" path="/var/lib/kubelet/pods/b36f6a63-d48a-4adb-bdb0-3b63c7679981/volumes" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.158880 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532414-26bx7" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.182838 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.226475 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.260453 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdb8f\" (UniqueName: \"kubernetes.io/projected/5355d669-4f87-48b6-b389-09f97979f9c6-kube-api-access-jdb8f\") pod \"5355d669-4f87-48b6-b389-09f97979f9c6\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.260623 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-utilities\") pod \"5355d669-4f87-48b6-b389-09f97979f9c6\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.260729 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-catalog-content\") pod \"5355d669-4f87-48b6-b389-09f97979f9c6\" (UID: \"5355d669-4f87-48b6-b389-09f97979f9c6\") " Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.263786 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-utilities" (OuterVolumeSpecName: "utilities") pod "5355d669-4f87-48b6-b389-09f97979f9c6" (UID: "5355d669-4f87-48b6-b389-09f97979f9c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.265674 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5355d669-4f87-48b6-b389-09f97979f9c6-kube-api-access-jdb8f" (OuterVolumeSpecName: "kube-api-access-jdb8f") pod "5355d669-4f87-48b6-b389-09f97979f9c6" (UID: "5355d669-4f87-48b6-b389-09f97979f9c6"). InnerVolumeSpecName "kube-api-access-jdb8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.293860 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.314933 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5355d669-4f87-48b6-b389-09f97979f9c6" (UID: "5355d669-4f87-48b6-b389-09f97979f9c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.362375 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.363065 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5355d669-4f87-48b6-b389-09f97979f9c6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.363086 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdb8f\" (UniqueName: \"kubernetes.io/projected/5355d669-4f87-48b6-b389-09f97979f9c6-kube-api-access-jdb8f\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.424352 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.444787 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc"] Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.528094 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.529554 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.584416 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532414-26bx7"] Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.585260 4982 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.604004 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.618029 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.820668 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.839895 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.897850 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.931393 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 24 14:54:39 crc kubenswrapper[4982]: I0224 14:54:39.965824 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.010390 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 14:54:40 crc 
kubenswrapper[4982]: I0224 14:54:40.020110 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.059097 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.087148 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.100534 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.104659 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" event={"ID":"652d0a6a-7206-4b47-b9ec-723351b9a056","Type":"ContainerStarted","Data":"6e467aa12b89538cf836074ca6d68b2165db71c2185fca79f0c434e3a28b7c26"} Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.104716 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" event={"ID":"652d0a6a-7206-4b47-b9ec-723351b9a056","Type":"ContainerStarted","Data":"173360c184afbc1d1fe5d554cf2adc94e6e5133317b8a5edb560495f58d81f58"} Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.106049 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532414-26bx7" event={"ID":"cee4a86b-61ae-481f-9744-0635388f16ab","Type":"ContainerStarted","Data":"3fa944468b1673eef0d40b765ca0496452c54925967af52b50a511b1fd8e853e"} Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.108794 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv7hr" event={"ID":"5355d669-4f87-48b6-b389-09f97979f9c6","Type":"ContainerDied","Data":"13aa4bc3b57191613c30ddf03fd1cb467d4a9d9ec0bbd9076d9042fe24b2d7e9"} Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.108855 4982 scope.go:117] "RemoveContainer" containerID="036a30027029037d6092d3d4e4928cf1c66a5b79165d02d0ca47fb2dbbfa24be" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.109132 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv7hr" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.133079 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc" podStartSLOduration=3.133054174 podStartE2EDuration="3.133054174s" podCreationTimestamp="2026-02-24 14:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:54:40.130966417 +0000 UTC m=+341.750024940" watchObservedRunningTime="2026-02-24 14:54:40.133054174 +0000 UTC m=+341.752112667" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.140234 4982 scope.go:117] "RemoveContainer" containerID="611e505cbdcc3a2065d1045390023bf768bbb05f732e51f5616e4ab94e9841eb" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.150296 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bv7hr"] Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.157046 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bv7hr"] Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.185207 4982 scope.go:117] "RemoveContainer" containerID="c2d1d35aa1ad6a3ec88ef0e26b0e927ab7bb46c2ae38b05771dafcf39c1a1501" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.199098 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.226076 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.245004 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.298654 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.302739 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.434614 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.516764 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.664430 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.866134 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.965090 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 14:54:40 crc kubenswrapper[4982]: I0224 14:54:40.982418 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.000534 4982 reflector.go:368] Caches populated 
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.062295 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.122680 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532414-26bx7" event={"ID":"cee4a86b-61ae-481f-9744-0635388f16ab","Type":"ContainerStarted","Data":"84fff1ab43822deb16f29349929d6cf2a137167a695be0832998867321f11f60"}
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.122760 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.128220 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5fdb59f4df-mmsvc"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.141325 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.154078 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" path="/var/lib/kubelet/pods/5355d669-4f87-48b6-b389-09f97979f9c6/volumes"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.220952 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.229704 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.286214 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.294547 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.306626 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.331339 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.335408 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.443042 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.464669 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.591051 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.669259 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.787426 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.814655 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.823042 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.877264 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.914744 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 24 14:54:41 crc kubenswrapper[4982]: I0224 14:54:41.952001 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.007918 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.131066 4982 generic.go:334] "Generic (PLEG): container finished" podID="cee4a86b-61ae-481f-9744-0635388f16ab" containerID="84fff1ab43822deb16f29349929d6cf2a137167a695be0832998867321f11f60" exitCode=0
Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.131199 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532414-26bx7" event={"ID":"cee4a86b-61ae-481f-9744-0635388f16ab","Type":"ContainerDied","Data":"84fff1ab43822deb16f29349929d6cf2a137167a695be0832998867321f11f60"}
Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.138244 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.227022 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.270587 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.279810 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.289269 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.499378 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532414-26bx7"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532414-26bx7" Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.567948 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.637876 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzfdv\" (UniqueName: \"kubernetes.io/projected/cee4a86b-61ae-481f-9744-0635388f16ab-kube-api-access-jzfdv\") pod \"cee4a86b-61ae-481f-9744-0635388f16ab\" (UID: \"cee4a86b-61ae-481f-9744-0635388f16ab\") " Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.651808 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee4a86b-61ae-481f-9744-0635388f16ab-kube-api-access-jzfdv" (OuterVolumeSpecName: "kube-api-access-jzfdv") pod "cee4a86b-61ae-481f-9744-0635388f16ab" (UID: "cee4a86b-61ae-481f-9744-0635388f16ab"). InnerVolumeSpecName "kube-api-access-jzfdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.740120 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzfdv\" (UniqueName: \"kubernetes.io/projected/cee4a86b-61ae-481f-9744-0635388f16ab-kube-api-access-jzfdv\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.966794 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 24 14:54:42 crc kubenswrapper[4982]: I0224 14:54:42.972489 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 24 14:54:43 crc kubenswrapper[4982]: I0224 14:54:43.072878 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 14:54:43 crc kubenswrapper[4982]: I0224 14:54:43.143694 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532414-26bx7" Feb 24 14:54:43 crc kubenswrapper[4982]: I0224 14:54:43.143689 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532414-26bx7" event={"ID":"cee4a86b-61ae-481f-9744-0635388f16ab","Type":"ContainerDied","Data":"3fa944468b1673eef0d40b765ca0496452c54925967af52b50a511b1fd8e853e"} Feb 24 14:54:43 crc kubenswrapper[4982]: I0224 14:54:43.143777 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa944468b1673eef0d40b765ca0496452c54925967af52b50a511b1fd8e853e" Feb 24 14:54:43 crc kubenswrapper[4982]: I0224 14:54:43.319077 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 14:54:43 crc kubenswrapper[4982]: I0224 14:54:43.321604 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 14:54:43 crc kubenswrapper[4982]: I0224 14:54:43.436669 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 24 14:54:43 crc kubenswrapper[4982]: I0224 14:54:43.774858 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 14:54:43 crc kubenswrapper[4982]: I0224 14:54:43.816343 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 24 14:54:44 crc kubenswrapper[4982]: I0224 14:54:44.491401 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 14:54:44 crc kubenswrapper[4982]: I0224 14:54:44.692022 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 14:54:45 crc kubenswrapper[4982]: I0224 14:54:45.216275 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 14:54:45 crc kubenswrapper[4982]: I0224 14:54:45.305766 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 14:54:48 crc kubenswrapper[4982]: I0224 14:54:48.276110 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 14:54:48 crc kubenswrapper[4982]: I0224 14:54:48.832047 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"] Feb 24 14:54:48 crc kubenswrapper[4982]: I0224 14:54:48.832366 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" podUID="1d5324b1-ddf4-4216-9d76-43d60b13cc06" containerName="controller-manager" containerID="cri-o://1c05b015e6ffa5caec7cd4cb582c11a07c6afe13785b82e651b8d694617d7020" gracePeriod=30 Feb 24 14:54:48 crc kubenswrapper[4982]: I0224 14:54:48.923198 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"] Feb 24 14:54:48 crc kubenswrapper[4982]: I0224 14:54:48.923541 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" podUID="97c93757-cb7f-4278-8bfb-5fa3a5ebd512" 
containerName="route-controller-manager" containerID="cri-o://aa69eeac4d4030f6af5e506a3e857dd58e196a57ff696fce93e14e29abdb1b40" gracePeriod=30 Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.191014 4982 generic.go:334] "Generic (PLEG): container finished" podID="1d5324b1-ddf4-4216-9d76-43d60b13cc06" containerID="1c05b015e6ffa5caec7cd4cb582c11a07c6afe13785b82e651b8d694617d7020" exitCode=0 Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.191050 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" event={"ID":"1d5324b1-ddf4-4216-9d76-43d60b13cc06","Type":"ContainerDied","Data":"1c05b015e6ffa5caec7cd4cb582c11a07c6afe13785b82e651b8d694617d7020"} Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.193875 4982 generic.go:334] "Generic (PLEG): container finished" podID="97c93757-cb7f-4278-8bfb-5fa3a5ebd512" containerID="aa69eeac4d4030f6af5e506a3e857dd58e196a57ff696fce93e14e29abdb1b40" exitCode=0 Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.193906 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" event={"ID":"97c93757-cb7f-4278-8bfb-5fa3a5ebd512","Type":"ContainerDied","Data":"aa69eeac4d4030f6af5e506a3e857dd58e196a57ff696fce93e14e29abdb1b40"} Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.272906 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.338638 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-client-ca\") pod \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.338775 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-proxy-ca-bundles\") pod \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.338862 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5324b1-ddf4-4216-9d76-43d60b13cc06-serving-cert\") pod \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.338917 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr5pq\" (UniqueName: \"kubernetes.io/projected/1d5324b1-ddf4-4216-9d76-43d60b13cc06-kube-api-access-jr5pq\") pod \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.339172 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-config\") pod \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\" (UID: \"1d5324b1-ddf4-4216-9d76-43d60b13cc06\") " Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.339616 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-client-ca" (OuterVolumeSpecName: 
"client-ca") pod "1d5324b1-ddf4-4216-9d76-43d60b13cc06" (UID: "1d5324b1-ddf4-4216-9d76-43d60b13cc06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.339736 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1d5324b1-ddf4-4216-9d76-43d60b13cc06" (UID: "1d5324b1-ddf4-4216-9d76-43d60b13cc06"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.340065 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-config" (OuterVolumeSpecName: "config") pod "1d5324b1-ddf4-4216-9d76-43d60b13cc06" (UID: "1d5324b1-ddf4-4216-9d76-43d60b13cc06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.340091 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.340110 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.346361 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5324b1-ddf4-4216-9d76-43d60b13cc06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d5324b1-ddf4-4216-9d76-43d60b13cc06" (UID: "1d5324b1-ddf4-4216-9d76-43d60b13cc06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.348063 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.348682 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5324b1-ddf4-4216-9d76-43d60b13cc06-kube-api-access-jr5pq" (OuterVolumeSpecName: "kube-api-access-jr5pq") pod "1d5324b1-ddf4-4216-9d76-43d60b13cc06" (UID: "1d5324b1-ddf4-4216-9d76-43d60b13cc06"). InnerVolumeSpecName "kube-api-access-jr5pq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.441094 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-config\") pod \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.441168 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-client-ca\") pod \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.441323 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-serving-cert\") pod \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.441407 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fszn\" (UniqueName: \"kubernetes.io/projected/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-kube-api-access-5fszn\") pod \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\" (UID: \"97c93757-cb7f-4278-8bfb-5fa3a5ebd512\") " Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.441767 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5324b1-ddf4-4216-9d76-43d60b13cc06-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.441786 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d5324b1-ddf4-4216-9d76-43d60b13cc06-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.441795 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr5pq\" (UniqueName: \"kubernetes.io/projected/1d5324b1-ddf4-4216-9d76-43d60b13cc06-kube-api-access-jr5pq\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.443074 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-config" (OuterVolumeSpecName: "config") pod "97c93757-cb7f-4278-8bfb-5fa3a5ebd512" (UID: "97c93757-cb7f-4278-8bfb-5fa3a5ebd512"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.443073 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-client-ca" (OuterVolumeSpecName: "client-ca") pod "97c93757-cb7f-4278-8bfb-5fa3a5ebd512" (UID: "97c93757-cb7f-4278-8bfb-5fa3a5ebd512"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.445031 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-kube-api-access-5fszn" (OuterVolumeSpecName: "kube-api-access-5fszn") pod "97c93757-cb7f-4278-8bfb-5fa3a5ebd512" (UID: "97c93757-cb7f-4278-8bfb-5fa3a5ebd512"). InnerVolumeSpecName "kube-api-access-5fszn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.449176 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97c93757-cb7f-4278-8bfb-5fa3a5ebd512" (UID: "97c93757-cb7f-4278-8bfb-5fa3a5ebd512"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.543630 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.543679 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fszn\" (UniqueName: \"kubernetes.io/projected/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-kube-api-access-5fszn\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.543699 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.543716 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97c93757-cb7f-4278-8bfb-5fa3a5ebd512-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.720000 4982 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 14:54:49 crc kubenswrapper[4982]: I0224 14:54:49.720308 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0c9bf3e712ada87a63ef2870877b499c03076930e9d7a74c367119890d3f7854" gracePeriod=5 Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.068579 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69fc857fc8-77g9h"] Feb 24 14:54:50 crc kubenswrapper[4982]: E0224 14:54:50.069441 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069465 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 24 14:54:50 crc kubenswrapper[4982]: E0224 14:54:50.069494 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" containerName="registry-server" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069536 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" containerName="registry-server" Feb 24 14:54:50 crc kubenswrapper[4982]: E0224 14:54:50.069558 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" containerName="extract-utilities" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069571 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" containerName="extract-utilities" Feb 24 14:54:50 crc kubenswrapper[4982]: E0224 14:54:50.069589 4982 
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069601 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c93757-cb7f-4278-8bfb-5fa3a5ebd512" containerName="route-controller-manager"
Feb 24 14:54:50 crc kubenswrapper[4982]: E0224 14:54:50.069624 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee4a86b-61ae-481f-9744-0635388f16ab" containerName="oc"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069637 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee4a86b-61ae-481f-9744-0635388f16ab" containerName="oc"
Feb 24 14:54:50 crc kubenswrapper[4982]: E0224 14:54:50.069652 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5324b1-ddf4-4216-9d76-43d60b13cc06" containerName="controller-manager"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069667 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5324b1-ddf4-4216-9d76-43d60b13cc06" containerName="controller-manager"
Feb 24 14:54:50 crc kubenswrapper[4982]: E0224 14:54:50.069684 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" containerName="extract-content"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069697 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" containerName="extract-content"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069937 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d5324b1-ddf4-4216-9d76-43d60b13cc06" containerName="controller-manager"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069956 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069971 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee4a86b-61ae-481f-9744-0635388f16ab" containerName="oc"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.069987 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5355d669-4f87-48b6-b389-09f97979f9c6" containerName="registry-server"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.070008 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c93757-cb7f-4278-8bfb-5fa3a5ebd512" containerName="route-controller-manager"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.070612 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.071345 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"]
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.071973 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.085805 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69fc857fc8-77g9h"]
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.096031 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"]
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.154142 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-config\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.154215 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-config\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.154241 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z89p\" (UniqueName: \"kubernetes.io/projected/ec326567-3a3d-454d-828b-9f937de2a823-kube-api-access-8z89p\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.154274 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/520142f9-d044-4e76-a811-9ae2d21bdd0e-serving-cert\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.154302 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-client-ca\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.154317 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-client-ca\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"
Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.154336 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec326567-3a3d-454d-828b-9f937de2a823-serving-cert\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h"
pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.154365 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-proxy-ca-bundles\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.154397 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42q2z\" (UniqueName: \"kubernetes.io/projected/520142f9-d044-4e76-a811-9ae2d21bdd0e-kube-api-access-42q2z\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.204421 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" event={"ID":"1d5324b1-ddf4-4216-9d76-43d60b13cc06","Type":"ContainerDied","Data":"11365907fc6069f11b6d491abbba21d9135ceddbb9331284201a5b9775da97d4"} Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.204493 4982 scope.go:117] "RemoveContainer" containerID="1c05b015e6ffa5caec7cd4cb582c11a07c6afe13785b82e651b8d694617d7020" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.204680 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.221070 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" event={"ID":"97c93757-cb7f-4278-8bfb-5fa3a5ebd512","Type":"ContainerDied","Data":"27d71489e44e8ea761bd8fd15d795d53cd3f00651e88d5a08d3a54322d559a51"} Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.221195 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.255906 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"] Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.256412 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-client-ca\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.256453 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-client-ca\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.256486 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec326567-3a3d-454d-828b-9f937de2a823-serving-cert\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.258378 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-proxy-ca-bundles\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.258451 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42q2z\" (UniqueName: \"kubernetes.io/projected/520142f9-d044-4e76-a811-9ae2d21bdd0e-kube-api-access-42q2z\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.258566 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-config\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.258613 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-config\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.258636 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z89p\" (UniqueName: \"kubernetes.io/projected/ec326567-3a3d-454d-828b-9f937de2a823-kube-api-access-8z89p\") pod 
\"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.258697 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/520142f9-d044-4e76-a811-9ae2d21bdd0e-serving-cert\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.259679 4982 scope.go:117] "RemoveContainer" containerID="aa69eeac4d4030f6af5e506a3e857dd58e196a57ff696fce93e14e29abdb1b40" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.259917 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-proxy-ca-bundles\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.262793 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-client-ca\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.264337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-config\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.266824 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/520142f9-d044-4e76-a811-9ae2d21bdd0e-serving-cert\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.271823 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-config\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.272226 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec326567-3a3d-454d-828b-9f937de2a823-serving-cert\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.272941 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-client-ca\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: 
\"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.288019 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z89p\" (UniqueName: \"kubernetes.io/projected/ec326567-3a3d-454d-828b-9f937de2a823-kube-api-access-8z89p\") pod \"controller-manager-69fc857fc8-77g9h\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.290908 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42q2z\" (UniqueName: \"kubernetes.io/projected/520142f9-d044-4e76-a811-9ae2d21bdd0e-kube-api-access-42q2z\") pod \"route-controller-manager-7c86799fb7-45drc\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.303548 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c967dc9f7-gqfvj"] Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.311134 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"] Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.314366 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55fbb658d-jgzvx"] Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.398455 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.413554 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.692242 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"] Feb 24 14:54:50 crc kubenswrapper[4982]: W0224 14:54:50.696556 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod520142f9_d044_4e76_a811_9ae2d21bdd0e.slice/crio-4629403a0bd23fc257816052b4d4f65970a95420a86bd0e85fc9ce8d14e76c22 WatchSource:0}: Error finding container 4629403a0bd23fc257816052b4d4f65970a95420a86bd0e85fc9ce8d14e76c22: Status 404 returned error can't find the container with id 4629403a0bd23fc257816052b4d4f65970a95420a86bd0e85fc9ce8d14e76c22 Feb 24 14:54:50 crc kubenswrapper[4982]: I0224 14:54:50.851128 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69fc857fc8-77g9h"] Feb 24 14:54:50 crc kubenswrapper[4982]: W0224 14:54:50.860282 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec326567_3a3d_454d_828b_9f937de2a823.slice/crio-18904a948dfc500297cc8347719534494f9f53b8675f0f763a0c1cadc6ddf0ec WatchSource:0}: Error finding container 18904a948dfc500297cc8347719534494f9f53b8675f0f763a0c1cadc6ddf0ec: Status 404 returned error can't find the container with id 18904a948dfc500297cc8347719534494f9f53b8675f0f763a0c1cadc6ddf0ec Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.155532 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d5324b1-ddf4-4216-9d76-43d60b13cc06" path="/var/lib/kubelet/pods/1d5324b1-ddf4-4216-9d76-43d60b13cc06/volumes" Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.156949 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c93757-cb7f-4278-8bfb-5fa3a5ebd512" path="/var/lib/kubelet/pods/97c93757-cb7f-4278-8bfb-5fa3a5ebd512/volumes" Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.229060 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" event={"ID":"520142f9-d044-4e76-a811-9ae2d21bdd0e","Type":"ContainerStarted","Data":"a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66"} Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.229131 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" event={"ID":"520142f9-d044-4e76-a811-9ae2d21bdd0e","Type":"ContainerStarted","Data":"4629403a0bd23fc257816052b4d4f65970a95420a86bd0e85fc9ce8d14e76c22"} Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.231675 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.237456 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" event={"ID":"ec326567-3a3d-454d-828b-9f937de2a823","Type":"ContainerStarted","Data":"fc87d936940b75d1a4de5736e576ecf64d45013497721dda55f97203c87d34b7"} Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.237530 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" 
event={"ID":"ec326567-3a3d-454d-828b-9f937de2a823","Type":"ContainerStarted","Data":"18904a948dfc500297cc8347719534494f9f53b8675f0f763a0c1cadc6ddf0ec"} Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.237826 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.245272 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.341076 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" podStartSLOduration=3.341059182 podStartE2EDuration="3.341059182s" podCreationTimestamp="2026-02-24 14:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:54:51.335856627 +0000 UTC m=+352.954915120" watchObservedRunningTime="2026-02-24 14:54:51.341059182 +0000 UTC m=+352.960117675" Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.341188 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" podStartSLOduration=3.341183275 podStartE2EDuration="3.341183275s" podCreationTimestamp="2026-02-24 14:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:54:51.289316713 +0000 UTC m=+352.908375206" watchObservedRunningTime="2026-02-24 14:54:51.341183275 +0000 UTC m=+352.960241768" Feb 24 14:54:51 crc kubenswrapper[4982]: I0224 14:54:51.424862 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.270172 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.270807 4982 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0c9bf3e712ada87a63ef2870877b499c03076930e9d7a74c367119890d3f7854" exitCode=137 Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.270849 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c7244434c44b3a4295ef1cee834f734383adab2ca84b3c3f95f58f2c94c319" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.291461 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.291570 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.335968 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.336042 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.336076 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.336099 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.336162 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.356826 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.356867 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.356888 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.356913 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.362807 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.437917 4982 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.437949 4982 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.437960 4982 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.437968 4982 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:55 crc kubenswrapper[4982]: I0224 14:54:55.437976 4982 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 14:54:56 crc kubenswrapper[4982]: I0224 14:54:56.278572 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 14:54:57 crc kubenswrapper[4982]: I0224 14:54:57.159164 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 24 14:54:57 crc kubenswrapper[4982]: I0224 14:54:57.159986 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 24 14:54:57 crc kubenswrapper[4982]: I0224 14:54:57.175455 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 14:54:57 crc kubenswrapper[4982]: I0224 14:54:57.175521 4982 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c2f681bc-15f1-49f4-9895-3990e6c2966e" Feb 24 14:54:57 crc kubenswrapper[4982]: I0224 14:54:57.175541 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 14:54:57 crc kubenswrapper[4982]: I0224 14:54:57.175549 4982 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c2f681bc-15f1-49f4-9895-3990e6c2966e" Feb 24 14:55:08 crc kubenswrapper[4982]: I0224 14:55:08.800131 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69fc857fc8-77g9h"] Feb 24 14:55:08 crc kubenswrapper[4982]: I0224 14:55:08.801052 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" podUID="ec326567-3a3d-454d-828b-9f937de2a823" containerName="controller-manager" containerID="cri-o://fc87d936940b75d1a4de5736e576ecf64d45013497721dda55f97203c87d34b7" gracePeriod=30 Feb 24 14:55:08 crc kubenswrapper[4982]: I0224 14:55:08.828296 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"] Feb 24 14:55:08 crc kubenswrapper[4982]: I0224 14:55:08.828595 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" podUID="520142f9-d044-4e76-a811-9ae2d21bdd0e" containerName="route-controller-manager" containerID="cri-o://a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66" gracePeriod=30 Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.365472 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.374269 4982 generic.go:334] "Generic (PLEG): container finished" podID="ec326567-3a3d-454d-828b-9f937de2a823" containerID="fc87d936940b75d1a4de5736e576ecf64d45013497721dda55f97203c87d34b7" exitCode=0 Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.374388 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" event={"ID":"ec326567-3a3d-454d-828b-9f937de2a823","Type":"ContainerDied","Data":"fc87d936940b75d1a4de5736e576ecf64d45013497721dda55f97203c87d34b7"} Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.380796 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" event={"ID":"520142f9-d044-4e76-a811-9ae2d21bdd0e","Type":"ContainerDied","Data":"a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66"} Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.380836 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.380860 4982 scope.go:117] "RemoveContainer" containerID="a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.381074 4982 generic.go:334] "Generic (PLEG): container finished" podID="520142f9-d044-4e76-a811-9ae2d21bdd0e" containerID="a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66" exitCode=0 Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.381113 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc" event={"ID":"520142f9-d044-4e76-a811-9ae2d21bdd0e","Type":"ContainerDied","Data":"4629403a0bd23fc257816052b4d4f65970a95420a86bd0e85fc9ce8d14e76c22"} Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.423732 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.431199 4982 scope.go:117] "RemoveContainer" containerID="a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.431879 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-config\") pod \"520142f9-d044-4e76-a811-9ae2d21bdd0e\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.431937 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-client-ca\") pod \"520142f9-d044-4e76-a811-9ae2d21bdd0e\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.431986 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42q2z\" (UniqueName: \"kubernetes.io/projected/520142f9-d044-4e76-a811-9ae2d21bdd0e-kube-api-access-42q2z\") pod \"520142f9-d044-4e76-a811-9ae2d21bdd0e\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.432016 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/520142f9-d044-4e76-a811-9ae2d21bdd0e-serving-cert\") pod \"520142f9-d044-4e76-a811-9ae2d21bdd0e\" (UID: \"520142f9-d044-4e76-a811-9ae2d21bdd0e\") " Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.433135 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-config" (OuterVolumeSpecName: "config") pod "520142f9-d044-4e76-a811-9ae2d21bdd0e" (UID: "520142f9-d044-4e76-a811-9ae2d21bdd0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:55:09 crc kubenswrapper[4982]: E0224 14:55:09.433146 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66\": container with ID starting with a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66 not found: ID does not exist" containerID="a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.433191 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66"} err="failed to get container status \"a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66\": rpc error: code = NotFound desc = could not find container \"a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66\": container with ID starting with a062041012205d9562087e2bd22a3c7ae15a8e74e93b52ed9d1513fbd4beeb66 not found: ID does not exist" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.433683 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "520142f9-d044-4e76-a811-9ae2d21bdd0e" (UID: "520142f9-d044-4e76-a811-9ae2d21bdd0e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.439089 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/520142f9-d044-4e76-a811-9ae2d21bdd0e-kube-api-access-42q2z" (OuterVolumeSpecName: "kube-api-access-42q2z") pod "520142f9-d044-4e76-a811-9ae2d21bdd0e" (UID: "520142f9-d044-4e76-a811-9ae2d21bdd0e"). InnerVolumeSpecName "kube-api-access-42q2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.440584 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/520142f9-d044-4e76-a811-9ae2d21bdd0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "520142f9-d044-4e76-a811-9ae2d21bdd0e" (UID: "520142f9-d044-4e76-a811-9ae2d21bdd0e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.533435 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z89p\" (UniqueName: \"kubernetes.io/projected/ec326567-3a3d-454d-828b-9f937de2a823-kube-api-access-8z89p\") pod \"ec326567-3a3d-454d-828b-9f937de2a823\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.533483 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec326567-3a3d-454d-828b-9f937de2a823-serving-cert\") pod \"ec326567-3a3d-454d-828b-9f937de2a823\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.533544 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-client-ca\") pod \"ec326567-3a3d-454d-828b-9f937de2a823\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.533590 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-config\") pod \"ec326567-3a3d-454d-828b-9f937de2a823\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.533627 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-proxy-ca-bundles\") pod \"ec326567-3a3d-454d-828b-9f937de2a823\" (UID: \"ec326567-3a3d-454d-828b-9f937de2a823\") " Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.533852 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.533865 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/520142f9-d044-4e76-a811-9ae2d21bdd0e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.533874 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42q2z\" (UniqueName: \"kubernetes.io/projected/520142f9-d044-4e76-a811-9ae2d21bdd0e-kube-api-access-42q2z\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 
14:55:09.533883 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/520142f9-d044-4e76-a811-9ae2d21bdd0e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.534792 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-config" (OuterVolumeSpecName: "config") pod "ec326567-3a3d-454d-828b-9f937de2a823" (UID: "ec326567-3a3d-454d-828b-9f937de2a823"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.534849 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec326567-3a3d-454d-828b-9f937de2a823" (UID: "ec326567-3a3d-454d-828b-9f937de2a823"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.534954 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ec326567-3a3d-454d-828b-9f937de2a823" (UID: "ec326567-3a3d-454d-828b-9f937de2a823"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.537680 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec326567-3a3d-454d-828b-9f937de2a823-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec326567-3a3d-454d-828b-9f937de2a823" (UID: "ec326567-3a3d-454d-828b-9f937de2a823"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.540460 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec326567-3a3d-454d-828b-9f937de2a823-kube-api-access-8z89p" (OuterVolumeSpecName: "kube-api-access-8z89p") pod "ec326567-3a3d-454d-828b-9f937de2a823" (UID: "ec326567-3a3d-454d-828b-9f937de2a823"). InnerVolumeSpecName "kube-api-access-8z89p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.635205 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.635256 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.635275 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec326567-3a3d-454d-828b-9f937de2a823-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.635293 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z89p\" (UniqueName: \"kubernetes.io/projected/ec326567-3a3d-454d-828b-9f937de2a823-kube-api-access-8z89p\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.635312 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec326567-3a3d-454d-828b-9f937de2a823-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.726791 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"] Feb 24 14:55:09 crc kubenswrapper[4982]: I0224 14:55:09.729554 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c86799fb7-45drc"] Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.079304 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs"] Feb 24 14:55:10 crc kubenswrapper[4982]: E0224 14:55:10.079821 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520142f9-d044-4e76-a811-9ae2d21bdd0e" containerName="route-controller-manager" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.079851 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="520142f9-d044-4e76-a811-9ae2d21bdd0e" containerName="route-controller-manager" Feb 24 14:55:10 crc kubenswrapper[4982]: E0224 14:55:10.079904 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec326567-3a3d-454d-828b-9f937de2a823" containerName="controller-manager" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.079923 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec326567-3a3d-454d-828b-9f937de2a823" containerName="controller-manager" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.080156 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec326567-3a3d-454d-828b-9f937de2a823" containerName="controller-manager" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.080195 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="520142f9-d044-4e76-a811-9ae2d21bdd0e" containerName="route-controller-manager" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.082350 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.089181 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.089834 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.090157 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.090635 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.090852 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.096362 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.113631 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7ff5bf444c-h46bd"] Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.115359 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.120036 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs"] Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.151029 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-config\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.151195 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910b9372-e222-4848-9e2a-472d37444c81-serving-cert\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.151215 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-client-ca\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.151241 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4dxk\" (UniqueName: \"kubernetes.io/projected/910b9372-e222-4848-9e2a-472d37444c81-kube-api-access-g4dxk\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: 
\"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.159421 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff5bf444c-h46bd"] Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.252290 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4c0609-f755-4f29-8938-441d8f24a989-serving-cert\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.252328 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-config\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.252363 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdhxv\" (UniqueName: \"kubernetes.io/projected/7f4c0609-f755-4f29-8938-441d8f24a989-kube-api-access-qdhxv\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.252460 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-client-ca\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.252541 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910b9372-e222-4848-9e2a-472d37444c81-serving-cert\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.252565 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-client-ca\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.252592 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4dxk\" (UniqueName: \"kubernetes.io/projected/910b9372-e222-4848-9e2a-472d37444c81-kube-api-access-g4dxk\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.252641 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-config\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.252665 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-proxy-ca-bundles\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.253705 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-client-ca\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.253985 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-config\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.256622 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910b9372-e222-4848-9e2a-472d37444c81-serving-cert\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.270069 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4dxk\" (UniqueName: \"kubernetes.io/projected/910b9372-e222-4848-9e2a-472d37444c81-kube-api-access-g4dxk\") pod \"route-controller-manager-86c679cff5-rftrs\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") " pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.354200 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-client-ca\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.354277 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-proxy-ca-bundles\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.354310 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4c0609-f755-4f29-8938-441d8f24a989-serving-cert\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: 
\"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.354325 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-config\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.354354 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdhxv\" (UniqueName: \"kubernetes.io/projected/7f4c0609-f755-4f29-8938-441d8f24a989-kube-api-access-qdhxv\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.355836 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-proxy-ca-bundles\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.356772 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-config\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.356811 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-client-ca\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.360043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4c0609-f755-4f29-8938-441d8f24a989-serving-cert\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.380319 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdhxv\" (UniqueName: \"kubernetes.io/projected/7f4c0609-f755-4f29-8938-441d8f24a989-kube-api-access-qdhxv\") pod \"controller-manager-7ff5bf444c-h46bd\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") " pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.390209 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" event={"ID":"ec326567-3a3d-454d-828b-9f937de2a823","Type":"ContainerDied","Data":"18904a948dfc500297cc8347719534494f9f53b8675f0f763a0c1cadc6ddf0ec"} Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.390312 4982 scope.go:117] "RemoveContainer" containerID="fc87d936940b75d1a4de5736e576ecf64d45013497721dda55f97203c87d34b7" Feb 24 14:55:10 crc 
kubenswrapper[4982]: I0224 14:55:10.390729 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc857fc8-77g9h" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.430594 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69fc857fc8-77g9h"] Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.435187 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69fc857fc8-77g9h"] Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.475673 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.483219 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.907482 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs"] Feb 24 14:55:10 crc kubenswrapper[4982]: I0224 14:55:10.970194 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7ff5bf444c-h46bd"] Feb 24 14:55:10 crc kubenswrapper[4982]: W0224 14:55:10.980448 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f4c0609_f755_4f29_8938_441d8f24a989.slice/crio-35bb8506ae938fd8f0fbc9e3bd2af4731ff29debfcdac19a101ec8a1335c8009 WatchSource:0}: Error finding container 35bb8506ae938fd8f0fbc9e3bd2af4731ff29debfcdac19a101ec8a1335c8009: Status 404 returned error can't find the container with id 35bb8506ae938fd8f0fbc9e3bd2af4731ff29debfcdac19a101ec8a1335c8009 Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.154175 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="520142f9-d044-4e76-a811-9ae2d21bdd0e" path="/var/lib/kubelet/pods/520142f9-d044-4e76-a811-9ae2d21bdd0e/volumes" Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.155328 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec326567-3a3d-454d-828b-9f937de2a823" path="/var/lib/kubelet/pods/ec326567-3a3d-454d-828b-9f937de2a823/volumes" Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.399906 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" event={"ID":"910b9372-e222-4848-9e2a-472d37444c81","Type":"ContainerStarted","Data":"127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e"} Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.399964 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.399975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" event={"ID":"910b9372-e222-4848-9e2a-472d37444c81","Type":"ContainerStarted","Data":"2b5007c8a7d756238ed6b2f08379e35432a4f9983f4907590f7b7307fa4e5d0c"} Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.402962 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" 
event={"ID":"7f4c0609-f755-4f29-8938-441d8f24a989","Type":"ContainerStarted","Data":"8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c"} Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.403000 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" event={"ID":"7f4c0609-f755-4f29-8938-441d8f24a989","Type":"ContainerStarted","Data":"35bb8506ae938fd8f0fbc9e3bd2af4731ff29debfcdac19a101ec8a1335c8009"} Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.403145 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.420100 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" podStartSLOduration=3.420081324 podStartE2EDuration="3.420081324s" podCreationTimestamp="2026-02-24 14:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:55:11.418011038 +0000 UTC m=+373.037069531" watchObservedRunningTime="2026-02-24 14:55:11.420081324 +0000 UTC m=+373.039139817" Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.422607 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.441288 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" podStartSLOduration=3.441270949 podStartE2EDuration="3.441270949s" podCreationTimestamp="2026-02-24 14:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:55:11.438858915 +0000 UTC m=+373.057917408" watchObservedRunningTime="2026-02-24 14:55:11.441270949 +0000 UTC m=+373.060329442" Feb 24 14:55:11 crc kubenswrapper[4982]: I0224 14:55:11.684085 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" Feb 24 14:55:20 crc kubenswrapper[4982]: I0224 14:55:20.960125 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrnhb"] Feb 24 14:55:20 crc kubenswrapper[4982]: I0224 14:55:20.961400 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qrnhb" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerName="registry-server" containerID="cri-o://1d548731d1e41576669d368040708737f8bb6455b73b6ddbd923cab81b62af28" gracePeriod=2 Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.511419 4982 generic.go:334] "Generic (PLEG): container finished" podID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerID="1d548731d1e41576669d368040708737f8bb6455b73b6ddbd923cab81b62af28" exitCode=0 Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.511533 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrnhb" event={"ID":"ef70a241-8f38-4315-a1c9-a6df74030a41","Type":"ContainerDied","Data":"1d548731d1e41576669d368040708737f8bb6455b73b6ddbd923cab81b62af28"} Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.574163 4982 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.741955 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87l2z\" (UniqueName: \"kubernetes.io/projected/ef70a241-8f38-4315-a1c9-a6df74030a41-kube-api-access-87l2z\") pod \"ef70a241-8f38-4315-a1c9-a6df74030a41\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.742053 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-utilities\") pod \"ef70a241-8f38-4315-a1c9-a6df74030a41\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.742177 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-catalog-content\") pod \"ef70a241-8f38-4315-a1c9-a6df74030a41\" (UID: \"ef70a241-8f38-4315-a1c9-a6df74030a41\") " Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.743457 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-utilities" (OuterVolumeSpecName: "utilities") pod "ef70a241-8f38-4315-a1c9-a6df74030a41" (UID: "ef70a241-8f38-4315-a1c9-a6df74030a41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.749266 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef70a241-8f38-4315-a1c9-a6df74030a41-kube-api-access-87l2z" (OuterVolumeSpecName: "kube-api-access-87l2z") pod "ef70a241-8f38-4315-a1c9-a6df74030a41" (UID: "ef70a241-8f38-4315-a1c9-a6df74030a41"). InnerVolumeSpecName "kube-api-access-87l2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.811930 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef70a241-8f38-4315-a1c9-a6df74030a41" (UID: "ef70a241-8f38-4315-a1c9-a6df74030a41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.844403 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87l2z\" (UniqueName: \"kubernetes.io/projected/ef70a241-8f38-4315-a1c9-a6df74030a41-kube-api-access-87l2z\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.844436 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:21 crc kubenswrapper[4982]: I0224 14:55:21.844449 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef70a241-8f38-4315-a1c9-a6df74030a41-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:55:22 crc kubenswrapper[4982]: I0224 14:55:22.524242 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrnhb" event={"ID":"ef70a241-8f38-4315-a1c9-a6df74030a41","Type":"ContainerDied","Data":"1f56214747703070887fde2eaf4f896ce332b5094398c4eef46acca4662ef93f"} Feb 24 14:55:22 crc kubenswrapper[4982]: I0224 14:55:22.524331 4982 scope.go:117] "RemoveContainer" containerID="1d548731d1e41576669d368040708737f8bb6455b73b6ddbd923cab81b62af28" Feb 24 14:55:22 crc kubenswrapper[4982]: I0224 14:55:22.524386 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrnhb" Feb 24 14:55:22 crc kubenswrapper[4982]: I0224 14:55:22.555001 4982 scope.go:117] "RemoveContainer" containerID="fd15ec9eea1e1b9847ca155db20f6ff4d652d2b13fa6c9ca292ba5175e27e319" Feb 24 14:55:22 crc kubenswrapper[4982]: I0224 14:55:22.587684 4982 scope.go:117] "RemoveContainer" containerID="99d907b308c1e59eb7463e404051762158bc0cca5a15b75c6c8e922179f7b54f" Feb 24 14:55:22 crc kubenswrapper[4982]: I0224 14:55:22.590577 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrnhb"] Feb 24 14:55:22 crc kubenswrapper[4982]: I0224 14:55:22.600401 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qrnhb"] Feb 24 14:55:23 crc kubenswrapper[4982]: I0224 14:55:23.154422 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" path="/var/lib/kubelet/pods/ef70a241-8f38-4315-a1c9-a6df74030a41/volumes" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.233517 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qqqvv"] Feb 24 14:55:27 crc kubenswrapper[4982]: E0224 14:55:27.234418 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerName="extract-content" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.234435 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerName="extract-content" Feb 24 14:55:27 crc kubenswrapper[4982]: E0224 14:55:27.234452 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerName="registry-server" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.234459 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerName="registry-server" Feb 24 14:55:27 crc kubenswrapper[4982]: E0224 14:55:27.234469 
4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerName="extract-utilities" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.234476 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerName="extract-utilities" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.234630 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef70a241-8f38-4315-a1c9-a6df74030a41" containerName="registry-server" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.235069 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.247116 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qqqvv"] Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.320058 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a14351a-837d-455a-8435-a166eb7997b7-trusted-ca\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.320107 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0a14351a-837d-455a-8435-a166eb7997b7-registry-tls\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.320153 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a14351a-837d-455a-8435-a166eb7997b7-bound-sa-token\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.320176 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0a14351a-837d-455a-8435-a166eb7997b7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.320210 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0a14351a-837d-455a-8435-a166eb7997b7-registry-certificates\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.320248 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljbfw\" (UniqueName: \"kubernetes.io/projected/0a14351a-837d-455a-8435-a166eb7997b7-kube-api-access-ljbfw\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc 
kubenswrapper[4982]: I0224 14:55:27.320303 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0a14351a-837d-455a-8435-a166eb7997b7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.320372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.337167 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.421486 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0a14351a-837d-455a-8435-a166eb7997b7-registry-tls\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.421545 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a14351a-837d-455a-8435-a166eb7997b7-bound-sa-token\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.421572 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0a14351a-837d-455a-8435-a166eb7997b7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.421594 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0a14351a-837d-455a-8435-a166eb7997b7-registry-certificates\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.421627 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljbfw\" (UniqueName: \"kubernetes.io/projected/0a14351a-837d-455a-8435-a166eb7997b7-kube-api-access-ljbfw\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.421666 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0a14351a-837d-455a-8435-a166eb7997b7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.421722 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a14351a-837d-455a-8435-a166eb7997b7-trusted-ca\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.422183 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0a14351a-837d-455a-8435-a166eb7997b7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.423091 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a14351a-837d-455a-8435-a166eb7997b7-trusted-ca\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.424209 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0a14351a-837d-455a-8435-a166eb7997b7-registry-certificates\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.427586 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0a14351a-837d-455a-8435-a166eb7997b7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.429113 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0a14351a-837d-455a-8435-a166eb7997b7-registry-tls\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.436820 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljbfw\" (UniqueName: \"kubernetes.io/projected/0a14351a-837d-455a-8435-a166eb7997b7-kube-api-access-ljbfw\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.437257 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0a14351a-837d-455a-8435-a166eb7997b7-bound-sa-token\") pod \"image-registry-66df7c8f76-qqqvv\" (UID: \"0a14351a-837d-455a-8435-a166eb7997b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" Feb 24 
14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.555730 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv"
Feb 24 14:55:27 crc kubenswrapper[4982]: W0224 14:55:27.982429 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a14351a_837d_455a_8435_a166eb7997b7.slice/crio-a1f4fbfe3c20c95c1a18a7964c794bab6f0263489bbe7a3affa3ab3d18eca0a3 WatchSource:0}: Error finding container a1f4fbfe3c20c95c1a18a7964c794bab6f0263489bbe7a3affa3ab3d18eca0a3: Status 404 returned error can't find the container with id a1f4fbfe3c20c95c1a18a7964c794bab6f0263489bbe7a3affa3ab3d18eca0a3
Feb 24 14:55:27 crc kubenswrapper[4982]: I0224 14:55:27.983415 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qqqvv"]
Feb 24 14:55:28 crc kubenswrapper[4982]: I0224 14:55:28.567418 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" event={"ID":"0a14351a-837d-455a-8435-a166eb7997b7","Type":"ContainerStarted","Data":"e512415d84d42590df86f5474011f0207d84f23a0f5aef17844029a11fea95ce"}
Feb 24 14:55:28 crc kubenswrapper[4982]: I0224 14:55:28.567907 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv"
Feb 24 14:55:28 crc kubenswrapper[4982]: I0224 14:55:28.567939 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" event={"ID":"0a14351a-837d-455a-8435-a166eb7997b7","Type":"ContainerStarted","Data":"a1f4fbfe3c20c95c1a18a7964c794bab6f0263489bbe7a3affa3ab3d18eca0a3"}
Feb 24 14:55:28 crc kubenswrapper[4982]: I0224 14:55:28.610371 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv" podStartSLOduration=1.610344985 podStartE2EDuration="1.610344985s" podCreationTimestamp="2026-02-24 14:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:55:28.601574511 +0000 UTC m=+390.220633044" watchObservedRunningTime="2026-02-24 14:55:28.610344985 +0000 UTC m=+390.229403518"
Feb 24 14:55:28 crc kubenswrapper[4982]: I0224 14:55:28.775099 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7ff5bf444c-h46bd"]
Feb 24 14:55:28 crc kubenswrapper[4982]: I0224 14:55:28.775603 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" podUID="7f4c0609-f755-4f29-8938-441d8f24a989" containerName="controller-manager" containerID="cri-o://8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c" gracePeriod=30
Feb 24 14:55:28 crc kubenswrapper[4982]: I0224 14:55:28.792485 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs"]
Feb 24 14:55:28 crc kubenswrapper[4982]: I0224 14:55:28.792771 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" podUID="910b9372-e222-4848-9e2a-472d37444c81" containerName="route-controller-manager" containerID="cri-o://127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e" gracePeriod=30
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.325440 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.379280 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.451258 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4dxk\" (UniqueName: \"kubernetes.io/projected/910b9372-e222-4848-9e2a-472d37444c81-kube-api-access-g4dxk\") pod \"910b9372-e222-4848-9e2a-472d37444c81\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") "
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.451306 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-config\") pod \"7f4c0609-f755-4f29-8938-441d8f24a989\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") "
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.451337 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdhxv\" (UniqueName: \"kubernetes.io/projected/7f4c0609-f755-4f29-8938-441d8f24a989-kube-api-access-qdhxv\") pod \"7f4c0609-f755-4f29-8938-441d8f24a989\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") "
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.451369 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-proxy-ca-bundles\") pod \"7f4c0609-f755-4f29-8938-441d8f24a989\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") "
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.451409 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-config\") pod \"910b9372-e222-4848-9e2a-472d37444c81\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") "
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.451429 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-client-ca\") pod \"910b9372-e222-4848-9e2a-472d37444c81\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") "
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.451477 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4c0609-f755-4f29-8938-441d8f24a989-serving-cert\") pod \"7f4c0609-f755-4f29-8938-441d8f24a989\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") "
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.451507 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-client-ca\") pod \"7f4c0609-f755-4f29-8938-441d8f24a989\" (UID: \"7f4c0609-f755-4f29-8938-441d8f24a989\") "
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.451596 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910b9372-e222-4848-9e2a-472d37444c81-serving-cert\") pod \"910b9372-e222-4848-9e2a-472d37444c81\" (UID: \"910b9372-e222-4848-9e2a-472d37444c81\") "
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.452438 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-client-ca" (OuterVolumeSpecName: "client-ca") pod "910b9372-e222-4848-9e2a-472d37444c81" (UID: "910b9372-e222-4848-9e2a-472d37444c81"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.452471 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7f4c0609-f755-4f29-8938-441d8f24a989" (UID: "7f4c0609-f755-4f29-8938-441d8f24a989"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.452577 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-config" (OuterVolumeSpecName: "config") pod "910b9372-e222-4848-9e2a-472d37444c81" (UID: "910b9372-e222-4848-9e2a-472d37444c81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.453068 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-config" (OuterVolumeSpecName: "config") pod "7f4c0609-f755-4f29-8938-441d8f24a989" (UID: "7f4c0609-f755-4f29-8938-441d8f24a989"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.453305 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f4c0609-f755-4f29-8938-441d8f24a989" (UID: "7f4c0609-f755-4f29-8938-441d8f24a989"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.456730 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4c0609-f755-4f29-8938-441d8f24a989-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f4c0609-f755-4f29-8938-441d8f24a989" (UID: "7f4c0609-f755-4f29-8938-441d8f24a989"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.456803 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/910b9372-e222-4848-9e2a-472d37444c81-kube-api-access-g4dxk" (OuterVolumeSpecName: "kube-api-access-g4dxk") pod "910b9372-e222-4848-9e2a-472d37444c81" (UID: "910b9372-e222-4848-9e2a-472d37444c81"). InnerVolumeSpecName "kube-api-access-g4dxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.456912 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/910b9372-e222-4848-9e2a-472d37444c81-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "910b9372-e222-4848-9e2a-472d37444c81" (UID: "910b9372-e222-4848-9e2a-472d37444c81"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.458155 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4c0609-f755-4f29-8938-441d8f24a989-kube-api-access-qdhxv" (OuterVolumeSpecName: "kube-api-access-qdhxv") pod "7f4c0609-f755-4f29-8938-441d8f24a989" (UID: "7f4c0609-f755-4f29-8938-441d8f24a989"). InnerVolumeSpecName "kube-api-access-qdhxv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.552923 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f4c0609-f755-4f29-8938-441d8f24a989-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.552957 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.552970 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/910b9372-e222-4848-9e2a-472d37444c81-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.552982 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4dxk\" (UniqueName: \"kubernetes.io/projected/910b9372-e222-4848-9e2a-472d37444c81-kube-api-access-g4dxk\") on node \"crc\" DevicePath \"\""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.552995 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-config\") on node \"crc\" DevicePath \"\""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.553008 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdhxv\" (UniqueName: \"kubernetes.io/projected/7f4c0609-f755-4f29-8938-441d8f24a989-kube-api-access-qdhxv\") on node \"crc\" DevicePath \"\""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.553019 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f4c0609-f755-4f29-8938-441d8f24a989-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.553034 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-config\") on node \"crc\" DevicePath \"\""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.553044 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/910b9372-e222-4848-9e2a-472d37444c81-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.572406 4982 generic.go:334] "Generic (PLEG): container finished" podID="910b9372-e222-4848-9e2a-472d37444c81" containerID="127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e" exitCode=0
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.572487 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.572489 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" event={"ID":"910b9372-e222-4848-9e2a-472d37444c81","Type":"ContainerDied","Data":"127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e"}
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.572669 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs" event={"ID":"910b9372-e222-4848-9e2a-472d37444c81","Type":"ContainerDied","Data":"2b5007c8a7d756238ed6b2f08379e35432a4f9983f4907590f7b7307fa4e5d0c"}
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.572698 4982 scope.go:117] "RemoveContainer" containerID="127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.573746 4982 generic.go:334] "Generic (PLEG): container finished" podID="7f4c0609-f755-4f29-8938-441d8f24a989" containerID="8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c" exitCode=0
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.573900 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" event={"ID":"7f4c0609-f755-4f29-8938-441d8f24a989","Type":"ContainerDied","Data":"8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c"}
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.573936 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd" event={"ID":"7f4c0609-f755-4f29-8938-441d8f24a989","Type":"ContainerDied","Data":"35bb8506ae938fd8f0fbc9e3bd2af4731ff29debfcdac19a101ec8a1335c8009"}
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.574055 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7ff5bf444c-h46bd"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.588405 4982 scope.go:117] "RemoveContainer" containerID="127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e"
Feb 24 14:55:29 crc kubenswrapper[4982]: E0224 14:55:29.588823 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e\": container with ID starting with 127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e not found: ID does not exist" containerID="127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.588855 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e"} err="failed to get container status \"127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e\": rpc error: code = NotFound desc = could not find container \"127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e\": container with ID starting with 127d5c9bba660384ab684fe415a334c6ac6f429a090677286ca0bccd6f3a0d0e not found: ID does not exist"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.588874 4982 scope.go:117] "RemoveContainer" containerID="8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.605624 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs"]
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.610766 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c679cff5-rftrs"]
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.614100 4982 scope.go:117] "RemoveContainer" containerID="8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c"
Feb 24 14:55:29 crc kubenswrapper[4982]: E0224 14:55:29.614646 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c\": container with ID starting with 8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c not found: ID does not exist" containerID="8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.614681 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c"} err="failed to get container status \"8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c\": rpc error: code = NotFound desc = could not find container \"8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c\": container with ID starting with 8c79ec5dc2edc304b419ec93194e2193e1cda156eba2356a08ac2f9c1cad535c not found: ID does not exist"
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.614983 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7ff5bf444c-h46bd"]
Feb 24 14:55:29 crc kubenswrapper[4982]: I0224 14:55:29.619804 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7ff5bf444c-h46bd"]
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.093799 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"]
Feb 24 14:55:30 crc kubenswrapper[4982]: E0224 14:55:30.094338 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910b9372-e222-4848-9e2a-472d37444c81" containerName="route-controller-manager"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.094450 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="910b9372-e222-4848-9e2a-472d37444c81" containerName="route-controller-manager"
Feb 24 14:55:30 crc kubenswrapper[4982]: E0224 14:55:30.094670 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4c0609-f755-4f29-8938-441d8f24a989" containerName="controller-manager"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.094751 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4c0609-f755-4f29-8938-441d8f24a989" containerName="controller-manager"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.094934 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4c0609-f755-4f29-8938-441d8f24a989" containerName="controller-manager"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.095036 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="910b9372-e222-4848-9e2a-472d37444c81" containerName="route-controller-manager"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.095553 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.097929 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.098231 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.099061 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.099079 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.102227 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.103412 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.107207 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.107757 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"]
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.109699 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.117422 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.117939 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"]
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.117950 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.118241 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.118370 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.118442 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.118683 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.142333 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"]
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.160448 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/060c7173-c251-476e-beda-b4b3b829594f-client-ca\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.160508 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-config\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.160575 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-client-ca\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.160669 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfjv8\" (UniqueName: \"kubernetes.io/projected/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-kube-api-access-mfjv8\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.160727 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060c7173-c251-476e-beda-b4b3b829594f-config\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.160769 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/060c7173-c251-476e-beda-b4b3b829594f-serving-cert\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.160818 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-serving-cert\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.160856 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtsnf\" (UniqueName: \"kubernetes.io/projected/060c7173-c251-476e-beda-b4b3b829594f-kube-api-access-wtsnf\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.160890 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-proxy-ca-bundles\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.261923 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfjv8\" (UniqueName: \"kubernetes.io/projected/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-kube-api-access-mfjv8\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.261971 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060c7173-c251-476e-beda-b4b3b829594f-config\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.261994 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/060c7173-c251-476e-beda-b4b3b829594f-serving-cert\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.262020 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-serving-cert\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.262044 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtsnf\" (UniqueName: \"kubernetes.io/projected/060c7173-c251-476e-beda-b4b3b829594f-kube-api-access-wtsnf\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.262068 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-proxy-ca-bundles\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.262101 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/060c7173-c251-476e-beda-b4b3b829594f-client-ca\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.262130 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-config\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.262157 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-client-ca\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.263106 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-client-ca\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.263289 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060c7173-c251-476e-beda-b4b3b829594f-config\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.263934 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/060c7173-c251-476e-beda-b4b3b829594f-client-ca\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.264149 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-proxy-ca-bundles\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.265066 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-config\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.268300 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-serving-cert\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.268300 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/060c7173-c251-476e-beda-b4b3b829594f-serving-cert\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.275968 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfjv8\" (UniqueName: \"kubernetes.io/projected/49a0cc33-16f3-4f58-9c53-9a0b5376b6a6-kube-api-access-mfjv8\") pod \"controller-manager-69fc857fc8-vgvvl\" (UID: \"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6\") " pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.281155 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtsnf\" (UniqueName: \"kubernetes.io/projected/060c7173-c251-476e-beda-b4b3b829594f-kube-api-access-wtsnf\") pod \"route-controller-manager-7c86799fb7-zc64h\" (UID: \"060c7173-c251-476e-beda-b4b3b829594f\") " pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.426770 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.439866 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.755703 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"]
Feb 24 14:55:30 crc kubenswrapper[4982]: I0224 14:55:30.890920 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"]
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.151222 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4c0609-f755-4f29-8938-441d8f24a989" path="/var/lib/kubelet/pods/7f4c0609-f755-4f29-8938-441d8f24a989/volumes"
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.152128 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="910b9372-e222-4848-9e2a-472d37444c81" path="/var/lib/kubelet/pods/910b9372-e222-4848-9e2a-472d37444c81/volumes"
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.596306 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h" event={"ID":"060c7173-c251-476e-beda-b4b3b829594f","Type":"ContainerStarted","Data":"f7b1a4f8477c515c961a9c2adb34052fad47bbc396159699128b4136520acd29"}
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.596856 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h" event={"ID":"060c7173-c251-476e-beda-b4b3b829594f","Type":"ContainerStarted","Data":"47b10a9c84ae2c86ee35f4e14d98869f8e171000749a6e1bfeb716f4540183e6"}
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.597281 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.598703 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl" event={"ID":"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6","Type":"ContainerStarted","Data":"c6db261dc6a60d532ec11376f7246e97a890a4a52f24f65ade0c78e0d9af4c18"}
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.598734 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl" event={"ID":"49a0cc33-16f3-4f58-9c53-9a0b5376b6a6","Type":"ContainerStarted","Data":"5568ec017eb9cf4a59c7f76540d4cb3a78945ffe51e9bc2d3ea106c7e06550ac"}
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.599296 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.605206 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h"
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.605769 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl"
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.628250 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c86799fb7-zc64h" podStartSLOduration=3.628230943 podStartE2EDuration="3.628230943s" podCreationTimestamp="2026-02-24 14:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:55:31.623933429 +0000 UTC m=+393.242991932" watchObservedRunningTime="2026-02-24 14:55:31.628230943 +0000 UTC m=+393.247289446"
Feb 24 14:55:31 crc kubenswrapper[4982]: I0224 14:55:31.687077 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69fc857fc8-vgvvl" podStartSLOduration=3.687053723 podStartE2EDuration="3.687053723s" podCreationTimestamp="2026-02-24 14:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:55:31.68206998 +0000 UTC m=+393.301128503" watchObservedRunningTime="2026-02-24 14:55:31.687053723 +0000 UTC m=+393.306112246"
Feb 24 14:55:47 crc kubenswrapper[4982]: I0224 14:55:47.560979 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qqqvv"
Feb 24 14:55:47 crc kubenswrapper[4982]: I0224 14:55:47.623782 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9wbq"]
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.156091 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532416-bztmv"]
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.160474 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532416-bztmv"
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.163881 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.163922 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.166242 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.168106 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532416-bztmv"]
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.327981 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/0fd23d9a-2f6f-4b85-beeb-514cf322d5ba-kube-api-access-skrs7\") pod \"auto-csr-approver-29532416-bztmv\" (UID: \"0fd23d9a-2f6f-4b85-beeb-514cf322d5ba\") " pod="openshift-infra/auto-csr-approver-29532416-bztmv"
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.429934 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/0fd23d9a-2f6f-4b85-beeb-514cf322d5ba-kube-api-access-skrs7\") pod \"auto-csr-approver-29532416-bztmv\" (UID: \"0fd23d9a-2f6f-4b85-beeb-514cf322d5ba\") " pod="openshift-infra/auto-csr-approver-29532416-bztmv"
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.453876 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/0fd23d9a-2f6f-4b85-beeb-514cf322d5ba-kube-api-access-skrs7\") pod \"auto-csr-approver-29532416-bztmv\" (UID: \"0fd23d9a-2f6f-4b85-beeb-514cf322d5ba\") " pod="openshift-infra/auto-csr-approver-29532416-bztmv"
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.488814 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532416-bztmv"
Feb 24 14:56:00 crc kubenswrapper[4982]: I0224 14:56:00.972878 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532416-bztmv"]
Feb 24 14:56:01 crc kubenswrapper[4982]: I0224 14:56:01.833665 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532416-bztmv" event={"ID":"0fd23d9a-2f6f-4b85-beeb-514cf322d5ba","Type":"ContainerStarted","Data":"4dd5ebe7ac6fff40ec8a0f9c9030474bf887b00e07ffa8db3723f57f26652c4a"}
Feb 24 14:56:02 crc kubenswrapper[4982]: I0224 14:56:02.842976 4982 generic.go:334] "Generic (PLEG): container finished" podID="0fd23d9a-2f6f-4b85-beeb-514cf322d5ba" containerID="c256484a9834d3a07f592fbf13e96fd6c8dc223bb16c915cc08d38ff3250bc82" exitCode=0
Feb 24 14:56:02 crc kubenswrapper[4982]: I0224 14:56:02.843048 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532416-bztmv" event={"ID":"0fd23d9a-2f6f-4b85-beeb-514cf322d5ba","Type":"ContainerDied","Data":"c256484a9834d3a07f592fbf13e96fd6c8dc223bb16c915cc08d38ff3250bc82"}
Feb 24 14:56:04 crc kubenswrapper[4982]: I0224 14:56:04.329818 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532416-bztmv"
Feb 24 14:56:04 crc kubenswrapper[4982]: I0224 14:56:04.489558 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/0fd23d9a-2f6f-4b85-beeb-514cf322d5ba-kube-api-access-skrs7\") pod \"0fd23d9a-2f6f-4b85-beeb-514cf322d5ba\" (UID: \"0fd23d9a-2f6f-4b85-beeb-514cf322d5ba\") "
Feb 24 14:56:04 crc kubenswrapper[4982]: I0224 14:56:04.501797 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd23d9a-2f6f-4b85-beeb-514cf322d5ba-kube-api-access-skrs7" (OuterVolumeSpecName: "kube-api-access-skrs7") pod "0fd23d9a-2f6f-4b85-beeb-514cf322d5ba" (UID: "0fd23d9a-2f6f-4b85-beeb-514cf322d5ba"). InnerVolumeSpecName "kube-api-access-skrs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:56:04 crc kubenswrapper[4982]: I0224 14:56:04.591753 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skrs7\" (UniqueName: \"kubernetes.io/projected/0fd23d9a-2f6f-4b85-beeb-514cf322d5ba-kube-api-access-skrs7\") on node \"crc\" DevicePath \"\""
Feb 24 14:56:04 crc kubenswrapper[4982]: I0224 14:56:04.859966 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532416-bztmv" event={"ID":"0fd23d9a-2f6f-4b85-beeb-514cf322d5ba","Type":"ContainerDied","Data":"4dd5ebe7ac6fff40ec8a0f9c9030474bf887b00e07ffa8db3723f57f26652c4a"}
Feb 24 14:56:04 crc kubenswrapper[4982]: I0224 14:56:04.860442 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd5ebe7ac6fff40ec8a0f9c9030474bf887b00e07ffa8db3723f57f26652c4a"
Feb 24 14:56:04 crc kubenswrapper[4982]: I0224 14:56:04.860063 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532416-bztmv"
Feb 24 14:56:08 crc kubenswrapper[4982]: I0224 14:56:08.738652 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 14:56:08 crc kubenswrapper[4982]: I0224 14:56:08.739231 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 14:56:12 crc kubenswrapper[4982]: I0224 14:56:12.664626 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" podUID="f31bded6-f3d5-42b4-b479-3c01ce30e73a" containerName="registry" containerID="cri-o://1cffe8b5992aa18c0107bfd265e93a129477a436d344260e03ab3d38c895b4de" gracePeriod=30
Feb 24 14:56:12 crc kubenswrapper[4982]: I0224 14:56:12.924178 4982 generic.go:334] "Generic (PLEG): container finished" podID="f31bded6-f3d5-42b4-b479-3c01ce30e73a" containerID="1cffe8b5992aa18c0107bfd265e93a129477a436d344260e03ab3d38c895b4de" exitCode=0
Feb 24 14:56:12 crc kubenswrapper[4982]: I0224 14:56:12.924481 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" event={"ID":"f31bded6-f3d5-42b4-b479-3c01ce30e73a","Type":"ContainerDied","Data":"1cffe8b5992aa18c0107bfd265e93a129477a436d344260e03ab3d38c895b4de"}
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.121607 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq"
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.236230 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") "
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.236280 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-bound-sa-token\") pod \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") "
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.236313 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f31bded6-f3d5-42b4-b479-3c01ce30e73a-ca-trust-extracted\") pod \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") "
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.236357 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxjgp\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-kube-api-access-cxjgp\") pod \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") "
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.236379 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-tls\") pod \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") "
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.236409 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-trusted-ca\") pod \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") "
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.236442 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f31bded6-f3d5-42b4-b479-3c01ce30e73a-installation-pull-secrets\") pod \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") "
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.236562 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-certificates\") pod \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\" (UID: \"f31bded6-f3d5-42b4-b479-3c01ce30e73a\") "
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.237469 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f31bded6-f3d5-42b4-b479-3c01ce30e73a" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.237655 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f31bded6-f3d5-42b4-b479-3c01ce30e73a" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.243515 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-kube-api-access-cxjgp" (OuterVolumeSpecName: "kube-api-access-cxjgp") pod "f31bded6-f3d5-42b4-b479-3c01ce30e73a" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a"). InnerVolumeSpecName "kube-api-access-cxjgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.245544 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31bded6-f3d5-42b4-b479-3c01ce30e73a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f31bded6-f3d5-42b4-b479-3c01ce30e73a" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.251643 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f31bded6-f3d5-42b4-b479-3c01ce30e73a" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.251926 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f31bded6-f3d5-42b4-b479-3c01ce30e73a" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.258316 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f31bded6-f3d5-42b4-b479-3c01ce30e73a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f31bded6-f3d5-42b4-b479-3c01ce30e73a" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.259782 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f31bded6-f3d5-42b4-b479-3c01ce30e73a" (UID: "f31bded6-f3d5-42b4-b479-3c01ce30e73a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.338066 4982 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.338123 4982 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f31bded6-f3d5-42b4-b479-3c01ce30e73a-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.338145 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxjgp\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-kube-api-access-cxjgp\") on node \"crc\" DevicePath \"\""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.338167 4982 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.338184 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.338201 4982 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f31bded6-f3d5-42b4-b479-3c01ce30e73a-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.338219 4982 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f31bded6-f3d5-42b4-b479-3c01ce30e73a-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.932930 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq" event={"ID":"f31bded6-f3d5-42b4-b479-3c01ce30e73a","Type":"ContainerDied","Data":"815feb4ea140d82d980ace17f3473d1e7352fdf54abd47bb7438f285d33805e9"}
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.933017 4982 scope.go:117] "RemoveContainer" containerID="1cffe8b5992aa18c0107bfd265e93a129477a436d344260e03ab3d38c895b4de"
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.933035 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n9wbq"
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.982279 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9wbq"]
Feb 24 14:56:13 crc kubenswrapper[4982]: I0224 14:56:13.987244 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n9wbq"]
Feb 24 14:56:15 crc kubenswrapper[4982]: I0224 14:56:15.158739 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31bded6-f3d5-42b4-b479-3c01ce30e73a" path="/var/lib/kubelet/pods/f31bded6-f3d5-42b4-b479-3c01ce30e73a/volumes"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.515093 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stdjb"]
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.515902 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-stdjb" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerName="registry-server" containerID="cri-o://49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761" gracePeriod=30
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.538848 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vndg2"]
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.539063 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vndg2" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerName="registry-server" containerID="cri-o://ad027a7c2604115a76c210314f8e96e20608e50676104392ff08f014a99e575c" gracePeriod=30
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.558983 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zv2x7"]
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.559265 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" podUID="34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" containerName="marketplace-operator" containerID="cri-o://1ff30d486c07fb0d262eae9e14822667822d7b069406949f58abaac427db01fd" gracePeriod=30
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.567639 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzclt"]
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.567999 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rzclt" podUID="6f482128-8f1f-43bc-b715-03049473f155" containerName="registry-server" containerID="cri-o://21f25b7b3df41cd9c2ac09af30d904cf8da15f52894f1ea27461071ca796157e" gracePeriod=30
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.572781 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8m2v"]
Feb 24 14:56:32 crc kubenswrapper[4982]: E0224 14:56:32.573036 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd23d9a-2f6f-4b85-beeb-514cf322d5ba" containerName="oc"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.573052 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd23d9a-2f6f-4b85-beeb-514cf322d5ba" containerName="oc"
Feb 24 14:56:32 crc kubenswrapper[4982]: E0224 14:56:32.573068 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31bded6-f3d5-42b4-b479-3c01ce30e73a" containerName="registry"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.573075 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31bded6-f3d5-42b4-b479-3c01ce30e73a" containerName="registry"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.573163 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31bded6-f3d5-42b4-b479-3c01ce30e73a" containerName="registry"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.573173 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd23d9a-2f6f-4b85-beeb-514cf322d5ba" containerName="oc"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.573615 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.576895 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvwws"]
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.577130 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xvwws" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerName="registry-server" containerID="cri-o://e06d51792bf74de0566b614c58673fefa982a507bc7142c43671e9080e46ab78" gracePeriod=30
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.584377 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8m2v"]
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.715124 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cfc42f28-cff7-46a9-a4cb-1421f8e7e61e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8m2v\" (UID: \"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.715327 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz7gb\" (UniqueName: \"kubernetes.io/projected/cfc42f28-cff7-46a9-a4cb-1421f8e7e61e-kube-api-access-kz7gb\") pod \"marketplace-operator-79b997595-w8m2v\" (UID: \"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.715351 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfc42f28-cff7-46a9-a4cb-1421f8e7e61e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8m2v\" (UID: \"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.816380 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz7gb\" (UniqueName: \"kubernetes.io/projected/cfc42f28-cff7-46a9-a4cb-1421f8e7e61e-kube-api-access-kz7gb\") pod \"marketplace-operator-79b997595-w8m2v\" (UID: \"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.816433 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfc42f28-cff7-46a9-a4cb-1421f8e7e61e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8m2v\" (UID: \"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.816511 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cfc42f28-cff7-46a9-a4cb-1421f8e7e61e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8m2v\" (UID: \"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.818411 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfc42f28-cff7-46a9-a4cb-1421f8e7e61e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8m2v\" (UID: \"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.832253 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cfc42f28-cff7-46a9-a4cb-1421f8e7e61e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8m2v\" (UID: \"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.833431 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz7gb\" (UniqueName: \"kubernetes.io/projected/cfc42f28-cff7-46a9-a4cb-1421f8e7e61e-kube-api-access-kz7gb\") pod \"marketplace-operator-79b997595-w8m2v\" (UID: \"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.971810 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v"
Feb 24 14:56:32 crc kubenswrapper[4982]: I0224 14:56:32.978768 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stdjb"
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.062404 4982 generic.go:334] "Generic (PLEG): container finished" podID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerID="e06d51792bf74de0566b614c58673fefa982a507bc7142c43671e9080e46ab78" exitCode=0
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.062567 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvwws" event={"ID":"ee9cd5d1-3ce1-4722-b3aa-892b33502443","Type":"ContainerDied","Data":"e06d51792bf74de0566b614c58673fefa982a507bc7142c43671e9080e46ab78"}
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.062651 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvwws" event={"ID":"ee9cd5d1-3ce1-4722-b3aa-892b33502443","Type":"ContainerDied","Data":"591ddb0d2146605148cff1fd6503046ee0a4a3b4e8d8b3dad9b38d4fb5c55b61"}
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.062670 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="591ddb0d2146605148cff1fd6503046ee0a4a3b4e8d8b3dad9b38d4fb5c55b61"
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.064282 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvwws"
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.070262 4982 generic.go:334] "Generic (PLEG): container finished" podID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerID="ad027a7c2604115a76c210314f8e96e20608e50676104392ff08f014a99e575c" exitCode=0
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.070355 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vndg2" event={"ID":"ddc9427b-029e-49c4-bce0-b2d40b9259c8","Type":"ContainerDied","Data":"ad027a7c2604115a76c210314f8e96e20608e50676104392ff08f014a99e575c"}
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.070385 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vndg2" event={"ID":"ddc9427b-029e-49c4-bce0-b2d40b9259c8","Type":"ContainerDied","Data":"b53af7dcdbb0904bd600bfae95ba1e99a9217c0a97efe1805ae746f835ff40dd"}
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.070405 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53af7dcdbb0904bd600bfae95ba1e99a9217c0a97efe1805ae746f835ff40dd"
Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.071328 4982 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.072104 4982 generic.go:334] "Generic (PLEG): container finished" podID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerID="49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761" exitCode=0 Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.072134 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stdjb" event={"ID":"e41be673-ff4a-465b-a472-22f962fbf6ed","Type":"ContainerDied","Data":"49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761"} Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.072148 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stdjb" event={"ID":"e41be673-ff4a-465b-a472-22f962fbf6ed","Type":"ContainerDied","Data":"cf5c23812bf17a113a04698647b6679bd2b67d08f371fb135c2107e05539d10d"} Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.072163 4982 scope.go:117] "RemoveContainer" containerID="49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.072238 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stdjb" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.074099 4982 generic.go:334] "Generic (PLEG): container finished" podID="34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" containerID="1ff30d486c07fb0d262eae9e14822667822d7b069406949f58abaac427db01fd" exitCode=0 Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.074134 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" event={"ID":"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd","Type":"ContainerDied","Data":"1ff30d486c07fb0d262eae9e14822667822d7b069406949f58abaac427db01fd"} Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.090343 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f482128-8f1f-43bc-b715-03049473f155" containerID="21f25b7b3df41cd9c2ac09af30d904cf8da15f52894f1ea27461071ca796157e" exitCode=0 Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.090391 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzclt" event={"ID":"6f482128-8f1f-43bc-b715-03049473f155","Type":"ContainerDied","Data":"21f25b7b3df41cd9c2ac09af30d904cf8da15f52894f1ea27461071ca796157e"} Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.090418 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzclt" event={"ID":"6f482128-8f1f-43bc-b715-03049473f155","Type":"ContainerDied","Data":"e18db69d4813beb64f416fc5efbbe1a70b4dfd717315fba095573a5354cc8ec8"} Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.090433 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18db69d4813beb64f416fc5efbbe1a70b4dfd717315fba095573a5354cc8ec8" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.093489 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.095492 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.117781 4982 scope.go:117] "RemoveContainer" containerID="229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.120043 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58nmx\" (UniqueName: \"kubernetes.io/projected/e41be673-ff4a-465b-a472-22f962fbf6ed-kube-api-access-58nmx\") pod \"e41be673-ff4a-465b-a472-22f962fbf6ed\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.120161 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-catalog-content\") pod \"e41be673-ff4a-465b-a472-22f962fbf6ed\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.120228 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-utilities\") pod \"e41be673-ff4a-465b-a472-22f962fbf6ed\" (UID: \"e41be673-ff4a-465b-a472-22f962fbf6ed\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.124571 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-utilities" (OuterVolumeSpecName: "utilities") pod "e41be673-ff4a-465b-a472-22f962fbf6ed" (UID: "e41be673-ff4a-465b-a472-22f962fbf6ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.127875 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41be673-ff4a-465b-a472-22f962fbf6ed-kube-api-access-58nmx" (OuterVolumeSpecName: "kube-api-access-58nmx") pod "e41be673-ff4a-465b-a472-22f962fbf6ed" (UID: "e41be673-ff4a-465b-a472-22f962fbf6ed"). InnerVolumeSpecName "kube-api-access-58nmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.160537 4982 scope.go:117] "RemoveContainer" containerID="b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.174657 4982 scope.go:117] "RemoveContainer" containerID="49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761" Feb 24 14:56:33 crc kubenswrapper[4982]: E0224 14:56:33.175356 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761\": container with ID starting with 49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761 not found: ID does not exist" containerID="49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.175431 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761"} err="failed to get container status \"49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761\": rpc error: code = NotFound desc = could not find container \"49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761\": container with ID starting with 49f3bce38de14acd3ded49f69655c4964efc3ffbdbe96592a636d282e1ea2761 not found: ID does not exist" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.175489 4982 scope.go:117] "RemoveContainer" containerID="229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f" Feb 24 14:56:33 crc kubenswrapper[4982]: E0224 14:56:33.175809 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f\": container with ID starting with 229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f not found: ID does not exist" containerID="229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.175849 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f"} err="failed to get container status \"229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f\": rpc error: code = NotFound desc = could not find container \"229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f\": container with ID starting with 229bb416d26eb353fba0b6dfec6dfd5dcc8e8316246adb5b36dfaf9c6ac0d23f not found: ID does not exist" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.175863 4982 scope.go:117] "RemoveContainer" containerID="b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5" Feb 24 14:56:33 crc kubenswrapper[4982]: E0224 14:56:33.176475 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5\": container with ID starting with b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5 not found: ID does not exist" containerID="b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.176698 4982 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5"} err="failed to get container status \"b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5\": rpc error: code = NotFound desc = could not find container \"b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5\": container with ID starting with b65b83ec8f9696c4740de0f688cec04e9aca794244ef1e6d7fe200898d9454b5 not found: ID does not exist" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.186582 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e41be673-ff4a-465b-a472-22f962fbf6ed" (UID: "e41be673-ff4a-465b-a472-22f962fbf6ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221304 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwrf4\" (UniqueName: \"kubernetes.io/projected/6f482128-8f1f-43bc-b715-03049473f155-kube-api-access-rwrf4\") pod \"6f482128-8f1f-43bc-b715-03049473f155\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221365 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-operator-metrics\") pod \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221410 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsqb9\" (UniqueName: \"kubernetes.io/projected/ddc9427b-029e-49c4-bce0-b2d40b9259c8-kube-api-access-lsqb9\") pod \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221457 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-catalog-content\") pod \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221486 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-utilities\") pod \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221529 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-catalog-content\") pod \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221550 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-trusted-ca\") pod \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221583 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-utilities\") pod \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\" (UID: \"ddc9427b-029e-49c4-bce0-b2d40b9259c8\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221608 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-catalog-content\") pod \"6f482128-8f1f-43bc-b715-03049473f155\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221632 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4fvm\" (UniqueName: \"kubernetes.io/projected/ee9cd5d1-3ce1-4722-b3aa-892b33502443-kube-api-access-h4fvm\") pod \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\" (UID: \"ee9cd5d1-3ce1-4722-b3aa-892b33502443\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221655 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8fbb\" (UniqueName: \"kubernetes.io/projected/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-kube-api-access-q8fbb\") pod \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\" (UID: \"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221699 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-utilities\") pod \"6f482128-8f1f-43bc-b715-03049473f155\" (UID: \"6f482128-8f1f-43bc-b715-03049473f155\") " Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221907 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221922 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41be673-ff4a-465b-a472-22f962fbf6ed-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.221932 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58nmx\" (UniqueName: \"kubernetes.io/projected/e41be673-ff4a-465b-a472-22f962fbf6ed-kube-api-access-58nmx\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.222573 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-utilities" (OuterVolumeSpecName: "utilities") pod "6f482128-8f1f-43bc-b715-03049473f155" (UID: "6f482128-8f1f-43bc-b715-03049473f155"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.223025 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" (UID: "34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.224014 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-utilities" (OuterVolumeSpecName: "utilities") pod "ee9cd5d1-3ce1-4722-b3aa-892b33502443" (UID: "ee9cd5d1-3ce1-4722-b3aa-892b33502443"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.224487 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f482128-8f1f-43bc-b715-03049473f155-kube-api-access-rwrf4" (OuterVolumeSpecName: "kube-api-access-rwrf4") pod "6f482128-8f1f-43bc-b715-03049473f155" (UID: "6f482128-8f1f-43bc-b715-03049473f155"). InnerVolumeSpecName "kube-api-access-rwrf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.224747 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee9cd5d1-3ce1-4722-b3aa-892b33502443-kube-api-access-h4fvm" (OuterVolumeSpecName: "kube-api-access-h4fvm") pod "ee9cd5d1-3ce1-4722-b3aa-892b33502443" (UID: "ee9cd5d1-3ce1-4722-b3aa-892b33502443"). InnerVolumeSpecName "kube-api-access-h4fvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.225516 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-utilities" (OuterVolumeSpecName: "utilities") pod "ddc9427b-029e-49c4-bce0-b2d40b9259c8" (UID: "ddc9427b-029e-49c4-bce0-b2d40b9259c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.225722 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-kube-api-access-q8fbb" (OuterVolumeSpecName: "kube-api-access-q8fbb") pod "34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" (UID: "34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd"). InnerVolumeSpecName "kube-api-access-q8fbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.225772 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" (UID: "34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.225904 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc9427b-029e-49c4-bce0-b2d40b9259c8-kube-api-access-lsqb9" (OuterVolumeSpecName: "kube-api-access-lsqb9") pod "ddc9427b-029e-49c4-bce0-b2d40b9259c8" (UID: "ddc9427b-029e-49c4-bce0-b2d40b9259c8"). InnerVolumeSpecName "kube-api-access-lsqb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.253542 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f482128-8f1f-43bc-b715-03049473f155" (UID: "6f482128-8f1f-43bc-b715-03049473f155"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.290077 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddc9427b-029e-49c4-bce0-b2d40b9259c8" (UID: "ddc9427b-029e-49c4-bce0-b2d40b9259c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.322986 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323014 4982 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323024 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323033 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddc9427b-029e-49c4-bce0-b2d40b9259c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323056 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323068 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4fvm\" (UniqueName: \"kubernetes.io/projected/ee9cd5d1-3ce1-4722-b3aa-892b33502443-kube-api-access-h4fvm\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323079 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8fbb\" (UniqueName: \"kubernetes.io/projected/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-kube-api-access-q8fbb\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323088 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f482128-8f1f-43bc-b715-03049473f155-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323097 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwrf4\" (UniqueName: \"kubernetes.io/projected/6f482128-8f1f-43bc-b715-03049473f155-kube-api-access-rwrf4\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323106 4982 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.323132 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsqb9\" (UniqueName: \"kubernetes.io/projected/ddc9427b-029e-49c4-bce0-b2d40b9259c8-kube-api-access-lsqb9\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.355747 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee9cd5d1-3ce1-4722-b3aa-892b33502443" (UID: "ee9cd5d1-3ce1-4722-b3aa-892b33502443"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.400581 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8m2v"] Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.404438 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stdjb"] Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.408288 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-stdjb"] Feb 24 14:56:33 crc kubenswrapper[4982]: I0224 14:56:33.424568 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee9cd5d1-3ce1-4722-b3aa-892b33502443-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.101398 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" event={"ID":"34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd","Type":"ContainerDied","Data":"8792090edab40f7e1351f2624ad3bb3044f726d710dcab952e1eb4cbd705cc2b"} Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.101433 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zv2x7" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.103266 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvwws" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.103164 4982 scope.go:117] "RemoveContainer" containerID="1ff30d486c07fb0d262eae9e14822667822d7b069406949f58abaac427db01fd" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.103136 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v" event={"ID":"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e","Type":"ContainerStarted","Data":"5513f7558d667a82fb799883840a2ff2ab735ed48b3e289079730d92128a3d8d"} Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.103363 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v" event={"ID":"cfc42f28-cff7-46a9-a4cb-1421f8e7e61e","Type":"ContainerStarted","Data":"21e013d53390344f47a0cbcab927e895f88d216f1e7eda7f46d1242162a5b9f6"} Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.103255 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vndg2" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.103561 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzclt" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.104816 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.113949 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.148619 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-w8m2v" podStartSLOduration=2.148592034 podStartE2EDuration="2.148592034s" podCreationTimestamp="2026-02-24 14:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:56:34.1332325 +0000 UTC m=+455.752291053" watchObservedRunningTime="2026-02-24 14:56:34.148592034 +0000 UTC m=+455.767650567" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.153974 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvwws"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.160935 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xvwws"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.172745 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzclt"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.175490 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzclt"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.242018 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zv2x7"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.249493 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zv2x7"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.256391 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vndg2"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.260305 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vndg2"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734634 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fzc7v"] Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734813 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerName="extract-utilities" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734824 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerName="extract-utilities" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734836 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" containerName="marketplace-operator" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734842 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" containerName="marketplace-operator" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734850 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734857 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734865 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerName="extract-content" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734870 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerName="extract-content" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734877 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerName="extract-utilities" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734882 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerName="extract-utilities" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734890 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f482128-8f1f-43bc-b715-03049473f155" containerName="extract-utilities" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734896 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f482128-8f1f-43bc-b715-03049473f155" containerName="extract-utilities" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734903 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerName="extract-content" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734909 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerName="extract-content" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734920 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerName="extract-content" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734925 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerName="extract-content" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734934 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734940 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734948 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734955 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734963 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f482128-8f1f-43bc-b715-03049473f155" containerName="extract-content" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734970 4982 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6f482128-8f1f-43bc-b715-03049473f155" containerName="extract-content" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734979 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerName="extract-utilities" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734985 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerName="extract-utilities" Feb 24 14:56:34 crc kubenswrapper[4982]: E0224 14:56:34.734991 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f482128-8f1f-43bc-b715-03049473f155" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.734998 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f482128-8f1f-43bc-b715-03049473f155" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.735075 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.735092 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.735100 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.735108 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" containerName="marketplace-operator" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.735116 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f482128-8f1f-43bc-b715-03049473f155" containerName="registry-server" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.735734 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.741696 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.747894 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d08aa9d-495f-4693-86d2-240687656356-utilities\") pod \"certified-operators-fzc7v\" (UID: \"6d08aa9d-495f-4693-86d2-240687656356\") " pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.747992 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsjrw\" (UniqueName: \"kubernetes.io/projected/6d08aa9d-495f-4693-86d2-240687656356-kube-api-access-bsjrw\") pod \"certified-operators-fzc7v\" (UID: \"6d08aa9d-495f-4693-86d2-240687656356\") " pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.748127 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d08aa9d-495f-4693-86d2-240687656356-catalog-content\") pod \"certified-operators-fzc7v\" (UID: \"6d08aa9d-495f-4693-86d2-240687656356\") " pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.754170 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzc7v"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.849220 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d08aa9d-495f-4693-86d2-240687656356-utilities\") pod \"certified-operators-fzc7v\" (UID: \"6d08aa9d-495f-4693-86d2-240687656356\") " pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.849534 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsjrw\" (UniqueName: \"kubernetes.io/projected/6d08aa9d-495f-4693-86d2-240687656356-kube-api-access-bsjrw\") pod \"certified-operators-fzc7v\" (UID: \"6d08aa9d-495f-4693-86d2-240687656356\") " pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.849583 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d08aa9d-495f-4693-86d2-240687656356-catalog-content\") pod \"certified-operators-fzc7v\" (UID: \"6d08aa9d-495f-4693-86d2-240687656356\") " pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.849967 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d08aa9d-495f-4693-86d2-240687656356-utilities\") pod \"certified-operators-fzc7v\" (UID: \"6d08aa9d-495f-4693-86d2-240687656356\") " pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.850219 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d08aa9d-495f-4693-86d2-240687656356-catalog-content\") pod \"certified-operators-fzc7v\" (UID: 
\"6d08aa9d-495f-4693-86d2-240687656356\") " pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.877191 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsjrw\" (UniqueName: \"kubernetes.io/projected/6d08aa9d-495f-4693-86d2-240687656356-kube-api-access-bsjrw\") pod \"certified-operators-fzc7v\" (UID: \"6d08aa9d-495f-4693-86d2-240687656356\") " pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.933793 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h5bts"] Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.934662 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.936278 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.949917 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlkrv\" (UniqueName: \"kubernetes.io/projected/94c42884-373a-42f1-91f4-1949f4a8fbe8-kube-api-access-jlkrv\") pod \"redhat-marketplace-h5bts\" (UID: \"94c42884-373a-42f1-91f4-1949f4a8fbe8\") " pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.949981 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94c42884-373a-42f1-91f4-1949f4a8fbe8-utilities\") pod \"redhat-marketplace-h5bts\" (UID: \"94c42884-373a-42f1-91f4-1949f4a8fbe8\") " pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.950205 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94c42884-373a-42f1-91f4-1949f4a8fbe8-catalog-content\") pod \"redhat-marketplace-h5bts\" (UID: \"94c42884-373a-42f1-91f4-1949f4a8fbe8\") " pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:34 crc kubenswrapper[4982]: I0224 14:56:34.955197 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5bts"] Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.068026 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94c42884-373a-42f1-91f4-1949f4a8fbe8-utilities\") pod \"redhat-marketplace-h5bts\" (UID: \"94c42884-373a-42f1-91f4-1949f4a8fbe8\") " pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.068130 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94c42884-373a-42f1-91f4-1949f4a8fbe8-catalog-content\") pod \"redhat-marketplace-h5bts\" (UID: \"94c42884-373a-42f1-91f4-1949f4a8fbe8\") " pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.068175 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlkrv\" (UniqueName: \"kubernetes.io/projected/94c42884-373a-42f1-91f4-1949f4a8fbe8-kube-api-access-jlkrv\") pod \"redhat-marketplace-h5bts\" (UID: 
\"94c42884-373a-42f1-91f4-1949f4a8fbe8\") " pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.068173 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.068678 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94c42884-373a-42f1-91f4-1949f4a8fbe8-catalog-content\") pod \"redhat-marketplace-h5bts\" (UID: \"94c42884-373a-42f1-91f4-1949f4a8fbe8\") " pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.069012 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94c42884-373a-42f1-91f4-1949f4a8fbe8-utilities\") pod \"redhat-marketplace-h5bts\" (UID: \"94c42884-373a-42f1-91f4-1949f4a8fbe8\") " pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.099989 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlkrv\" (UniqueName: \"kubernetes.io/projected/94c42884-373a-42f1-91f4-1949f4a8fbe8-kube-api-access-jlkrv\") pod \"redhat-marketplace-h5bts\" (UID: \"94c42884-373a-42f1-91f4-1949f4a8fbe8\") " pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.152050 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd" path="/var/lib/kubelet/pods/34a4ec9b-e5ee-4d44-a78c-8f82be65cfbd/volumes" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.152761 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f482128-8f1f-43bc-b715-03049473f155" path="/var/lib/kubelet/pods/6f482128-8f1f-43bc-b715-03049473f155/volumes" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.153569 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddc9427b-029e-49c4-bce0-b2d40b9259c8" path="/var/lib/kubelet/pods/ddc9427b-029e-49c4-bce0-b2d40b9259c8/volumes" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.155838 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41be673-ff4a-465b-a472-22f962fbf6ed" path="/var/lib/kubelet/pods/e41be673-ff4a-465b-a472-22f962fbf6ed/volumes" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.156767 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee9cd5d1-3ce1-4722-b3aa-892b33502443" path="/var/lib/kubelet/pods/ee9cd5d1-3ce1-4722-b3aa-892b33502443/volumes" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.259214 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.468993 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzc7v"] Feb 24 14:56:35 crc kubenswrapper[4982]: W0224 14:56:35.472721 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d08aa9d_495f_4693_86d2_240687656356.slice/crio-de0f3f24cdfa3e0e7f5b73040013463336226c1a7a533e36c21569402eb9bbf8 WatchSource:0}: Error finding container de0f3f24cdfa3e0e7f5b73040013463336226c1a7a533e36c21569402eb9bbf8: Status 404 returned error can't find the container with id de0f3f24cdfa3e0e7f5b73040013463336226c1a7a533e36c21569402eb9bbf8 Feb 24 14:56:35 crc kubenswrapper[4982]: I0224 14:56:35.668744 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h5bts"] Feb 24 14:56:35 crc kubenswrapper[4982]: W0224 14:56:35.679774 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94c42884_373a_42f1_91f4_1949f4a8fbe8.slice/crio-257d191a038ec18f8a41cb061472b6e52002e7985df60c8f78a3e7441d0ca740 WatchSource:0}: Error finding container 257d191a038ec18f8a41cb061472b6e52002e7985df60c8f78a3e7441d0ca740: Status 404 returned error can't find the container with id 257d191a038ec18f8a41cb061472b6e52002e7985df60c8f78a3e7441d0ca740 Feb 24 14:56:36 crc kubenswrapper[4982]: I0224 14:56:36.116898 4982 generic.go:334] "Generic (PLEG): container finished" podID="6d08aa9d-495f-4693-86d2-240687656356" containerID="1b6ac3d1d3786960ec759e61dfae659d7ca129924c0e3f02ced99d1564c25840" exitCode=0 Feb 24 14:56:36 crc kubenswrapper[4982]: I0224 14:56:36.116992 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc7v" event={"ID":"6d08aa9d-495f-4693-86d2-240687656356","Type":"ContainerDied","Data":"1b6ac3d1d3786960ec759e61dfae659d7ca129924c0e3f02ced99d1564c25840"} Feb 24 14:56:36 crc kubenswrapper[4982]: I0224 14:56:36.117022 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc7v" event={"ID":"6d08aa9d-495f-4693-86d2-240687656356","Type":"ContainerStarted","Data":"de0f3f24cdfa3e0e7f5b73040013463336226c1a7a533e36c21569402eb9bbf8"} Feb 24 14:56:36 crc kubenswrapper[4982]: I0224 14:56:36.118626 4982 generic.go:334] "Generic (PLEG): container finished" podID="94c42884-373a-42f1-91f4-1949f4a8fbe8" containerID="0172f15cd2ac5ff7b52a7e92a73ad90d945f3a41a8d95b5b59c344352ed46bde" exitCode=0 Feb 24 14:56:36 crc kubenswrapper[4982]: I0224 14:56:36.120538 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5bts" event={"ID":"94c42884-373a-42f1-91f4-1949f4a8fbe8","Type":"ContainerDied","Data":"0172f15cd2ac5ff7b52a7e92a73ad90d945f3a41a8d95b5b59c344352ed46bde"} Feb 24 14:56:36 crc kubenswrapper[4982]: I0224 14:56:36.120566 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5bts" event={"ID":"94c42884-373a-42f1-91f4-1949f4a8fbe8","Type":"ContainerStarted","Data":"257d191a038ec18f8a41cb061472b6e52002e7985df60c8f78a3e7441d0ca740"} Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.134779 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f82l9"] Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.136947 4982 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc7v" event={"ID":"6d08aa9d-495f-4693-86d2-240687656356","Type":"ContainerStarted","Data":"dd9ded27d8b1ee3c3c8327db21b437af9f10d2086326f3f9b6cf6376ef536906"} Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.137048 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.139296 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.140466 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5bts" event={"ID":"94c42884-373a-42f1-91f4-1949f4a8fbe8","Type":"ContainerStarted","Data":"006374f00191526ba23be766e09abe318475aa141fce8d5546fde1aadd5778d2"} Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.157256 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f82l9"] Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.196820 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f-catalog-content\") pod \"community-operators-f82l9\" (UID: \"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f\") " pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.196901 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f-utilities\") pod \"community-operators-f82l9\" (UID: \"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f\") " pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.196941 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72xhd\" (UniqueName: \"kubernetes.io/projected/6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f-kube-api-access-72xhd\") pod \"community-operators-f82l9\" (UID: \"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f\") " pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.297627 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f-utilities\") pod \"community-operators-f82l9\" (UID: \"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f\") " pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.297679 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72xhd\" (UniqueName: \"kubernetes.io/projected/6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f-kube-api-access-72xhd\") pod \"community-operators-f82l9\" (UID: \"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f\") " pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.297746 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f-catalog-content\") pod \"community-operators-f82l9\" (UID: \"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f\") " 
pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.298108 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f-utilities\") pod \"community-operators-f82l9\" (UID: \"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f\") " pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.298164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f-catalog-content\") pod \"community-operators-f82l9\" (UID: \"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f\") " pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.316435 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72xhd\" (UniqueName: \"kubernetes.io/projected/6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f-kube-api-access-72xhd\") pod \"community-operators-f82l9\" (UID: \"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f\") " pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.339531 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tw86r"] Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.340463 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.342611 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.354763 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tw86r"] Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.398291 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45qb\" (UniqueName: \"kubernetes.io/projected/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-kube-api-access-b45qb\") pod \"redhat-operators-tw86r\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.398358 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-utilities\") pod \"redhat-operators-tw86r\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.398493 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-catalog-content\") pod \"redhat-operators-tw86r\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.463960 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.499419 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45qb\" (UniqueName: \"kubernetes.io/projected/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-kube-api-access-b45qb\") pod \"redhat-operators-tw86r\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.499526 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-utilities\") pod \"redhat-operators-tw86r\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.499586 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-catalog-content\") pod \"redhat-operators-tw86r\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.500549 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-utilities\") pod \"redhat-operators-tw86r\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.500590 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-catalog-content\") pod \"redhat-operators-tw86r\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.519402 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45qb\" (UniqueName: \"kubernetes.io/projected/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-kube-api-access-b45qb\") pod \"redhat-operators-tw86r\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.666885 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:37 crc kubenswrapper[4982]: I0224 14:56:37.910900 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f82l9"] Feb 24 14:56:37 crc kubenswrapper[4982]: W0224 14:56:37.915771 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c58b3d9_b12e_4b69_9dcd_0c5b6b96f00f.slice/crio-8723a365957e728259b9914b5dc246c68b112b925823edee561d845a236f57e5 WatchSource:0}: Error finding container 8723a365957e728259b9914b5dc246c68b112b925823edee561d845a236f57e5: Status 404 returned error can't find the container with id 8723a365957e728259b9914b5dc246c68b112b925823edee561d845a236f57e5 Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.056447 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tw86r"] Feb 24 14:56:38 crc kubenswrapper[4982]: W0224 14:56:38.057940 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod031f25c3_ccd3_4aa9_815e_9a61baa6ecf5.slice/crio-203226fc99592d59dbdf8d590e9291a369418a2136879982e2f607dc15616d0d WatchSource:0}: Error finding container 203226fc99592d59dbdf8d590e9291a369418a2136879982e2f607dc15616d0d: Status 404 returned error can't find the container with id 203226fc99592d59dbdf8d590e9291a369418a2136879982e2f607dc15616d0d Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.147442 4982 generic.go:334] "Generic (PLEG): container finished" podID="94c42884-373a-42f1-91f4-1949f4a8fbe8" containerID="006374f00191526ba23be766e09abe318475aa141fce8d5546fde1aadd5778d2" exitCode=0 Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.147548 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5bts" event={"ID":"94c42884-373a-42f1-91f4-1949f4a8fbe8","Type":"ContainerDied","Data":"006374f00191526ba23be766e09abe318475aa141fce8d5546fde1aadd5778d2"} Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.149487 4982 generic.go:334] "Generic (PLEG): container finished" podID="6d08aa9d-495f-4693-86d2-240687656356" containerID="dd9ded27d8b1ee3c3c8327db21b437af9f10d2086326f3f9b6cf6376ef536906" exitCode=0 Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.149553 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc7v" event={"ID":"6d08aa9d-495f-4693-86d2-240687656356","Type":"ContainerDied","Data":"dd9ded27d8b1ee3c3c8327db21b437af9f10d2086326f3f9b6cf6376ef536906"} Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.150688 4982 generic.go:334] "Generic (PLEG): container finished" podID="6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f" containerID="f19532703171d24a11ea1785f249f46b547e365e8b3926fb9d062fedac83136f" exitCode=0 Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.150757 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f82l9" event={"ID":"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f","Type":"ContainerDied","Data":"f19532703171d24a11ea1785f249f46b547e365e8b3926fb9d062fedac83136f"} Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.150780 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f82l9" event={"ID":"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f","Type":"ContainerStarted","Data":"8723a365957e728259b9914b5dc246c68b112b925823edee561d845a236f57e5"} Feb 
24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.151905 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw86r" event={"ID":"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5","Type":"ContainerStarted","Data":"203226fc99592d59dbdf8d590e9291a369418a2136879982e2f607dc15616d0d"} Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.739103 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 14:56:38 crc kubenswrapper[4982]: I0224 14:56:38.739647 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 14:56:39 crc kubenswrapper[4982]: I0224 14:56:39.157760 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f82l9" event={"ID":"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f","Type":"ContainerStarted","Data":"e46824afe5ffbf8cbabd198b0cc3b970ca9739b73a5cebb5b135a1d891181dc1"} Feb 24 14:56:39 crc kubenswrapper[4982]: I0224 14:56:39.159810 4982 generic.go:334] "Generic (PLEG): container finished" podID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerID="d998e46a3167b0c55198914e82fe3dec9985423062cbbfaa70c54289ad91711b" exitCode=0 Feb 24 14:56:39 crc kubenswrapper[4982]: I0224 14:56:39.159848 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw86r" event={"ID":"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5","Type":"ContainerDied","Data":"d998e46a3167b0c55198914e82fe3dec9985423062cbbfaa70c54289ad91711b"} Feb 24 14:56:39 crc kubenswrapper[4982]: I0224 14:56:39.162286 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h5bts" event={"ID":"94c42884-373a-42f1-91f4-1949f4a8fbe8","Type":"ContainerStarted","Data":"266a012efa6a9c162447388b1f0b75901570a2f14d63e5622d332d7f9fec706a"} Feb 24 14:56:39 crc kubenswrapper[4982]: I0224 14:56:39.164487 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzc7v" event={"ID":"6d08aa9d-495f-4693-86d2-240687656356","Type":"ContainerStarted","Data":"a63ad910b53d533b8ad5bcc7247e8c71019d5c401e94fc75eab3ec5e2956e5da"} Feb 24 14:56:39 crc kubenswrapper[4982]: I0224 14:56:39.205532 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fzc7v" podStartSLOduration=2.7751712079999997 podStartE2EDuration="5.205515981s" podCreationTimestamp="2026-02-24 14:56:34 +0000 UTC" firstStartedPulling="2026-02-24 14:56:36.120438198 +0000 UTC m=+457.739496721" lastFinishedPulling="2026-02-24 14:56:38.550783001 +0000 UTC m=+460.169841494" observedRunningTime="2026-02-24 14:56:39.203203168 +0000 UTC m=+460.822261661" watchObservedRunningTime="2026-02-24 14:56:39.205515981 +0000 UTC m=+460.824574474" Feb 24 14:56:39 crc kubenswrapper[4982]: I0224 14:56:39.222358 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h5bts" podStartSLOduration=2.695389857 podStartE2EDuration="5.222337165s" podCreationTimestamp="2026-02-24 14:56:34 +0000 
UTC" firstStartedPulling="2026-02-24 14:56:36.123196734 +0000 UTC m=+457.742255227" lastFinishedPulling="2026-02-24 14:56:38.650144042 +0000 UTC m=+460.269202535" observedRunningTime="2026-02-24 14:56:39.219712413 +0000 UTC m=+460.838770916" watchObservedRunningTime="2026-02-24 14:56:39.222337165 +0000 UTC m=+460.841395648" Feb 24 14:56:40 crc kubenswrapper[4982]: I0224 14:56:40.170794 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw86r" event={"ID":"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5","Type":"ContainerStarted","Data":"695027bd7579a0eea7961da57b90a6c4454a31d8cdb0c2514a6823f28b576d1d"} Feb 24 14:56:40 crc kubenswrapper[4982]: I0224 14:56:40.174596 4982 generic.go:334] "Generic (PLEG): container finished" podID="6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f" containerID="e46824afe5ffbf8cbabd198b0cc3b970ca9739b73a5cebb5b135a1d891181dc1" exitCode=0 Feb 24 14:56:40 crc kubenswrapper[4982]: I0224 14:56:40.174710 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f82l9" event={"ID":"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f","Type":"ContainerDied","Data":"e46824afe5ffbf8cbabd198b0cc3b970ca9739b73a5cebb5b135a1d891181dc1"} Feb 24 14:56:41 crc kubenswrapper[4982]: I0224 14:56:41.182164 4982 generic.go:334] "Generic (PLEG): container finished" podID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerID="695027bd7579a0eea7961da57b90a6c4454a31d8cdb0c2514a6823f28b576d1d" exitCode=0 Feb 24 14:56:41 crc kubenswrapper[4982]: I0224 14:56:41.182220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw86r" event={"ID":"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5","Type":"ContainerDied","Data":"695027bd7579a0eea7961da57b90a6c4454a31d8cdb0c2514a6823f28b576d1d"} Feb 24 14:56:42 crc kubenswrapper[4982]: I0224 14:56:42.191222 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f82l9" event={"ID":"6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f","Type":"ContainerStarted","Data":"58109fb895cb241e982b4c7d1ea9119e5f8d362aa68edf0f2ef6f71fc34285f1"} Feb 24 14:56:42 crc kubenswrapper[4982]: I0224 14:56:42.193483 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw86r" event={"ID":"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5","Type":"ContainerStarted","Data":"23a6317975657e3133962b1f2469bf493f0e5e2f011997a6dcf6d0298145a527"} Feb 24 14:56:42 crc kubenswrapper[4982]: I0224 14:56:42.212656 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f82l9" podStartSLOduration=1.682182925 podStartE2EDuration="5.212635574s" podCreationTimestamp="2026-02-24 14:56:37 +0000 UTC" firstStartedPulling="2026-02-24 14:56:38.152240127 +0000 UTC m=+459.771298640" lastFinishedPulling="2026-02-24 14:56:41.682692796 +0000 UTC m=+463.301751289" observedRunningTime="2026-02-24 14:56:42.210761553 +0000 UTC m=+463.829820056" watchObservedRunningTime="2026-02-24 14:56:42.212635574 +0000 UTC m=+463.831694067" Feb 24 14:56:42 crc kubenswrapper[4982]: I0224 14:56:42.227770 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tw86r" podStartSLOduration=2.8180928400000003 podStartE2EDuration="5.227746261s" podCreationTimestamp="2026-02-24 14:56:37 +0000 UTC" firstStartedPulling="2026-02-24 14:56:39.160913191 +0000 UTC m=+460.779971684" lastFinishedPulling="2026-02-24 14:56:41.570566612 +0000 UTC m=+463.189625105" 
observedRunningTime="2026-02-24 14:56:42.227055772 +0000 UTC m=+463.846114265" watchObservedRunningTime="2026-02-24 14:56:42.227746261 +0000 UTC m=+463.846804764" Feb 24 14:56:45 crc kubenswrapper[4982]: I0224 14:56:45.068703 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:45 crc kubenswrapper[4982]: I0224 14:56:45.070593 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:45 crc kubenswrapper[4982]: I0224 14:56:45.131246 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:45 crc kubenswrapper[4982]: I0224 14:56:45.260346 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:45 crc kubenswrapper[4982]: I0224 14:56:45.260389 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:45 crc kubenswrapper[4982]: I0224 14:56:45.274128 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fzc7v" Feb 24 14:56:45 crc kubenswrapper[4982]: I0224 14:56:45.322978 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:46 crc kubenswrapper[4982]: I0224 14:56:46.261416 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h5bts" Feb 24 14:56:47 crc kubenswrapper[4982]: I0224 14:56:47.464748 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:47 crc kubenswrapper[4982]: I0224 14:56:47.465178 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:47 crc kubenswrapper[4982]: I0224 14:56:47.500085 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:47 crc kubenswrapper[4982]: I0224 14:56:47.668036 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:47 crc kubenswrapper[4982]: I0224 14:56:47.668087 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:47 crc kubenswrapper[4982]: I0224 14:56:47.729737 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:56:48 crc kubenswrapper[4982]: I0224 14:56:48.274599 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f82l9" Feb 24 14:56:48 crc kubenswrapper[4982]: I0224 14:56:48.277532 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.810045 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms"] Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.813166 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.816704 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.817162 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.817268 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.817381 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.817616 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.820462 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms"] Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.969205 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2z6\" (UniqueName: \"kubernetes.io/projected/609f7930-8370-4b41-b411-e91396dc85ef-kube-api-access-hx2z6\") pod \"cluster-monitoring-operator-6d5b84845-q5wms\" (UID: \"609f7930-8370-4b41-b411-e91396dc85ef\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.969299 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/609f7930-8370-4b41-b411-e91396dc85ef-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-q5wms\" (UID: \"609f7930-8370-4b41-b411-e91396dc85ef\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:04 crc kubenswrapper[4982]: I0224 14:57:04.969346 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/609f7930-8370-4b41-b411-e91396dc85ef-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-q5wms\" (UID: \"609f7930-8370-4b41-b411-e91396dc85ef\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:05 crc kubenswrapper[4982]: I0224 14:57:05.071048 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2z6\" (UniqueName: \"kubernetes.io/projected/609f7930-8370-4b41-b411-e91396dc85ef-kube-api-access-hx2z6\") pod \"cluster-monitoring-operator-6d5b84845-q5wms\" (UID: \"609f7930-8370-4b41-b411-e91396dc85ef\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:05 crc kubenswrapper[4982]: I0224 14:57:05.071114 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/609f7930-8370-4b41-b411-e91396dc85ef-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-q5wms\" (UID: \"609f7930-8370-4b41-b411-e91396dc85ef\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:05 crc kubenswrapper[4982]: I0224 
14:57:05.071168 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/609f7930-8370-4b41-b411-e91396dc85ef-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-q5wms\" (UID: \"609f7930-8370-4b41-b411-e91396dc85ef\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:05 crc kubenswrapper[4982]: I0224 14:57:05.072409 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/609f7930-8370-4b41-b411-e91396dc85ef-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-q5wms\" (UID: \"609f7930-8370-4b41-b411-e91396dc85ef\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:05 crc kubenswrapper[4982]: I0224 14:57:05.080196 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/609f7930-8370-4b41-b411-e91396dc85ef-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-q5wms\" (UID: \"609f7930-8370-4b41-b411-e91396dc85ef\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:05 crc kubenswrapper[4982]: I0224 14:57:05.090395 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2z6\" (UniqueName: \"kubernetes.io/projected/609f7930-8370-4b41-b411-e91396dc85ef-kube-api-access-hx2z6\") pod \"cluster-monitoring-operator-6d5b84845-q5wms\" (UID: \"609f7930-8370-4b41-b411-e91396dc85ef\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:05 crc kubenswrapper[4982]: I0224 14:57:05.142334 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" Feb 24 14:57:06 crc kubenswrapper[4982]: I0224 14:57:05.627863 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms"] Feb 24 14:57:06 crc kubenswrapper[4982]: I0224 14:57:06.334711 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" event={"ID":"609f7930-8370-4b41-b411-e91396dc85ef","Type":"ContainerStarted","Data":"2b5cfdbc5f6d83b4c026b2736128cbf2a3db962250db1631759cbd7b2dcfacae"} Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.017567 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr"] Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.018995 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.022482 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-p7z42" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.022582 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.028326 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr"] Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.216090 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8eb0f083-b5ab-428e-9daa-a2e27f19d3bf-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7l8rr\" (UID: \"8eb0f083-b5ab-428e-9daa-a2e27f19d3bf\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.317798 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8eb0f083-b5ab-428e-9daa-a2e27f19d3bf-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7l8rr\" (UID: \"8eb0f083-b5ab-428e-9daa-a2e27f19d3bf\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.328040 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8eb0f083-b5ab-428e-9daa-a2e27f19d3bf-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-7l8rr\" (UID: \"8eb0f083-b5ab-428e-9daa-a2e27f19d3bf\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.336161 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.350466 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" event={"ID":"609f7930-8370-4b41-b411-e91396dc85ef","Type":"ContainerStarted","Data":"71bdd95298ba4990e45954d42e523489f2db91f78009511d45e7866c0440e4e3"} Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.379084 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-q5wms" podStartSLOduration=2.702815994 podStartE2EDuration="4.379063634s" podCreationTimestamp="2026-02-24 14:57:04 +0000 UTC" firstStartedPulling="2026-02-24 14:57:05.6499314 +0000 UTC m=+487.268989943" lastFinishedPulling="2026-02-24 14:57:07.32617908 +0000 UTC m=+488.945237583" observedRunningTime="2026-02-24 14:57:08.375553368 +0000 UTC m=+489.994611871" watchObservedRunningTime="2026-02-24 14:57:08.379063634 +0000 UTC m=+489.998122137" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.621445 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr"] Feb 24 14:57:08 crc kubenswrapper[4982]: W0224 14:57:08.626112 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eb0f083_b5ab_428e_9daa_a2e27f19d3bf.slice/crio-ac931fa39c218f6f244d68e42f388ddca9b91cec673fdd69e5ad398834accb5d WatchSource:0}: Error finding container ac931fa39c218f6f244d68e42f388ddca9b91cec673fdd69e5ad398834accb5d: Status 404 returned error can't find the container with id ac931fa39c218f6f244d68e42f388ddca9b91cec673fdd69e5ad398834accb5d Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.738286 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.738895 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.738966 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.740967 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f71e224afa19708cf06f60785deb400aee56bae1714124867d30c9a242dd993"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 14:57:08 crc kubenswrapper[4982]: I0224 14:57:08.741080 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" 
containerID="cri-o://5f71e224afa19708cf06f60785deb400aee56bae1714124867d30c9a242dd993" gracePeriod=600 Feb 24 14:57:09 crc kubenswrapper[4982]: I0224 14:57:09.365050 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="5f71e224afa19708cf06f60785deb400aee56bae1714124867d30c9a242dd993" exitCode=0 Feb 24 14:57:09 crc kubenswrapper[4982]: I0224 14:57:09.365397 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"5f71e224afa19708cf06f60785deb400aee56bae1714124867d30c9a242dd993"} Feb 24 14:57:09 crc kubenswrapper[4982]: I0224 14:57:09.365573 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"e5a98397f8b5beef975d846a08d561095dbc655637a46095abad7d674ae42009"} Feb 24 14:57:09 crc kubenswrapper[4982]: I0224 14:57:09.365608 4982 scope.go:117] "RemoveContainer" containerID="94f923e5e9baba3cac77c2160a892f5751f67c9336dd0c4268dd929a8df663f9" Feb 24 14:57:09 crc kubenswrapper[4982]: I0224 14:57:09.372228 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" event={"ID":"8eb0f083-b5ab-428e-9daa-a2e27f19d3bf","Type":"ContainerStarted","Data":"ac931fa39c218f6f244d68e42f388ddca9b91cec673fdd69e5ad398834accb5d"} Feb 24 14:57:10 crc kubenswrapper[4982]: I0224 14:57:10.379862 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" event={"ID":"8eb0f083-b5ab-428e-9daa-a2e27f19d3bf","Type":"ContainerStarted","Data":"dfc967268360351f285e0b4790dd82719b407c51357e2053b15fff77078916a0"} Feb 24 14:57:10 crc kubenswrapper[4982]: I0224 14:57:10.380282 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" Feb 24 14:57:10 crc kubenswrapper[4982]: I0224 14:57:10.387372 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" Feb 24 14:57:10 crc kubenswrapper[4982]: I0224 14:57:10.403881 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-7l8rr" podStartSLOduration=0.962321395 podStartE2EDuration="2.40385393s" podCreationTimestamp="2026-02-24 14:57:08 +0000 UTC" firstStartedPulling="2026-02-24 14:57:08.628424413 +0000 UTC m=+490.247482906" lastFinishedPulling="2026-02-24 14:57:10.069956948 +0000 UTC m=+491.689015441" observedRunningTime="2026-02-24 14:57:10.400603849 +0000 UTC m=+492.019662382" watchObservedRunningTime="2026-02-24 14:57:10.40385393 +0000 UTC m=+492.022912433" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.153078 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-npd4c"] Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.154107 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.158945 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.159075 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-2plmj" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.159246 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.159315 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.173175 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-npd4c"] Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.272824 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.272863 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.272919 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7jcb\" (UniqueName: \"kubernetes.io/projected/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-kube-api-access-m7jcb\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.273104 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-metrics-client-ca\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.373687 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.374034 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.374088 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7jcb\" (UniqueName: \"kubernetes.io/projected/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-kube-api-access-m7jcb\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.374108 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-metrics-client-ca\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.374933 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-metrics-client-ca\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.379850 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.379912 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.393671 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7jcb\" (UniqueName: \"kubernetes.io/projected/2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f-kube-api-access-m7jcb\") pod \"prometheus-operator-db54df47d-npd4c\" (UID: \"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.470436 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" Feb 24 14:57:11 crc kubenswrapper[4982]: I0224 14:57:11.672656 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-npd4c"] Feb 24 14:57:12 crc kubenswrapper[4982]: I0224 14:57:12.400604 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" event={"ID":"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f","Type":"ContainerStarted","Data":"fcf6e147c7640564001e63bf3f80f96398e801496b48ac38c3dddce593a9add0"} Feb 24 14:57:14 crc kubenswrapper[4982]: I0224 14:57:14.414899 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" event={"ID":"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f","Type":"ContainerStarted","Data":"09d5597029aa1aa008788de5be48b9a44297a8b02e60ef23f9f2f9fb3c318a36"} Feb 24 14:57:14 crc kubenswrapper[4982]: I0224 14:57:14.415638 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" event={"ID":"2bfd7b71-ce1a-4ecd-9b0d-b19264e3129f","Type":"ContainerStarted","Data":"2653454e9eaf6756332aca392fde040380eb648bde3a036fe9ed7d968f75d692"} Feb 24 14:57:14 crc kubenswrapper[4982]: I0224 14:57:14.440544 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-npd4c" podStartSLOduration=1.809737787 podStartE2EDuration="3.440473802s" podCreationTimestamp="2026-02-24 14:57:11 +0000 UTC" firstStartedPulling="2026-02-24 14:57:11.686061899 +0000 UTC m=+493.305120412" lastFinishedPulling="2026-02-24 14:57:13.316797924 +0000 UTC m=+494.935856427" observedRunningTime="2026-02-24 14:57:14.436933904 +0000 UTC m=+496.055992437" watchObservedRunningTime="2026-02-24 14:57:14.440473802 +0000 UTC m=+496.059532335" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.503283 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-9b6mp"] Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.504524 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.506125 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.506132 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-g65g4" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.509951 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.534429 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx"] Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.535695 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.537258 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.537704 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb"] Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.537903 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-rmlrc" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.539528 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.543730 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.545053 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.546964 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.548128 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.548316 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-xl7xx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.558149 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb"] Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.561151 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx"] Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650368 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vlkz\" (UniqueName: \"kubernetes.io/projected/040a67bc-b920-407e-a0a9-babf6da613c0-kube-api-access-2vlkz\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650425 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f90e8f96-798e-4751-9c5e-5652011a5505-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650443 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/040a67bc-b920-407e-a0a9-babf6da613c0-root\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650463 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-tls\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650483 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/040a67bc-b920-407e-a0a9-babf6da613c0-metrics-client-ca\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650518 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/332e9f21-0090-4aa0-987e-249f1b91e6d5-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650541 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/040a67bc-b920-407e-a0a9-babf6da613c0-sys\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f90e8f96-798e-4751-9c5e-5652011a5505-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650594 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f90e8f96-798e-4751-9c5e-5652011a5505-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650630 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650660 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f90e8f96-798e-4751-9c5e-5652011a5505-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650704 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp9z5\" (UniqueName: \"kubernetes.io/projected/f90e8f96-798e-4751-9c5e-5652011a5505-kube-api-access-lp9z5\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650729 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-wtmp\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650753 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/332e9f21-0090-4aa0-987e-249f1b91e6d5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650777 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f90e8f96-798e-4751-9c5e-5652011a5505-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650798 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9slkn\" (UniqueName: \"kubernetes.io/projected/332e9f21-0090-4aa0-987e-249f1b91e6d5-kube-api-access-9slkn\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650834 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/332e9f21-0090-4aa0-987e-249f1b91e6d5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.650853 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-textfile\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.751895 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp9z5\" (UniqueName: \"kubernetes.io/projected/f90e8f96-798e-4751-9c5e-5652011a5505-kube-api-access-lp9z5\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.751936 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-wtmp\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.751958 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/332e9f21-0090-4aa0-987e-249f1b91e6d5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.751984 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f90e8f96-798e-4751-9c5e-5652011a5505-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752013 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9slkn\" (UniqueName: \"kubernetes.io/projected/332e9f21-0090-4aa0-987e-249f1b91e6d5-kube-api-access-9slkn\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752038 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/332e9f21-0090-4aa0-987e-249f1b91e6d5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752054 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-textfile\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752070 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vlkz\" (UniqueName: \"kubernetes.io/projected/040a67bc-b920-407e-a0a9-babf6da613c0-kube-api-access-2vlkz\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752094 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f90e8f96-798e-4751-9c5e-5652011a5505-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752109 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: 
\"kubernetes.io/host-path/040a67bc-b920-407e-a0a9-babf6da613c0-root\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752126 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-tls\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752143 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/040a67bc-b920-407e-a0a9-babf6da613c0-metrics-client-ca\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752161 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/332e9f21-0090-4aa0-987e-249f1b91e6d5-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752213 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-wtmp\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752228 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/040a67bc-b920-407e-a0a9-babf6da613c0-root\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752539 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/040a67bc-b920-407e-a0a9-babf6da613c0-sys\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752572 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-textfile\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752592 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f90e8f96-798e-4751-9c5e-5652011a5505-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752612 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/040a67bc-b920-407e-a0a9-babf6da613c0-sys\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752617 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f90e8f96-798e-4751-9c5e-5652011a5505-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752690 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752720 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f90e8f96-798e-4751-9c5e-5652011a5505-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.752726 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f90e8f96-798e-4751-9c5e-5652011a5505-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.753138 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/332e9f21-0090-4aa0-987e-249f1b91e6d5-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.753156 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f90e8f96-798e-4751-9c5e-5652011a5505-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.753362 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f90e8f96-798e-4751-9c5e-5652011a5505-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.753385 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/040a67bc-b920-407e-a0a9-babf6da613c0-metrics-client-ca\") pod \"node-exporter-9b6mp\" 
(UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.759857 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.760155 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/040a67bc-b920-407e-a0a9-babf6da613c0-node-exporter-tls\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.760185 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/332e9f21-0090-4aa0-987e-249f1b91e6d5-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.776256 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f90e8f96-798e-4751-9c5e-5652011a5505-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.776328 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f90e8f96-798e-4751-9c5e-5652011a5505-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.777604 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9slkn\" (UniqueName: \"kubernetes.io/projected/332e9f21-0090-4aa0-987e-249f1b91e6d5-kube-api-access-9slkn\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.777706 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vlkz\" (UniqueName: \"kubernetes.io/projected/040a67bc-b920-407e-a0a9-babf6da613c0-kube-api-access-2vlkz\") pod \"node-exporter-9b6mp\" (UID: \"040a67bc-b920-407e-a0a9-babf6da613c0\") " pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.778570 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/332e9f21-0090-4aa0-987e-249f1b91e6d5-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4xhlx\" (UID: \"332e9f21-0090-4aa0-987e-249f1b91e6d5\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 
14:57:16.778656 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp9z5\" (UniqueName: \"kubernetes.io/projected/f90e8f96-798e-4751-9c5e-5652011a5505-kube-api-access-lp9z5\") pod \"kube-state-metrics-777cb5bd5d-s4mbb\" (UID: \"f90e8f96-798e-4751-9c5e-5652011a5505\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.817710 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-9b6mp" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.852940 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" Feb 24 14:57:16 crc kubenswrapper[4982]: I0224 14:57:16.860031 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.312909 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb"] Feb 24 14:57:17 crc kubenswrapper[4982]: W0224 14:57:17.316748 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf90e8f96_798e_4751_9c5e_5652011a5505.slice/crio-283e4bfe8a8ac13ba93ce00c1cab3e9f4c904071cdbe6a9e2ab490198e536a3b WatchSource:0}: Error finding container 283e4bfe8a8ac13ba93ce00c1cab3e9f4c904071cdbe6a9e2ab490198e536a3b: Status 404 returned error can't find the container with id 283e4bfe8a8ac13ba93ce00c1cab3e9f4c904071cdbe6a9e2ab490198e536a3b Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.351611 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx"] Feb 24 14:57:17 crc kubenswrapper[4982]: W0224 14:57:17.356797 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod332e9f21_0090_4aa0_987e_249f1b91e6d5.slice/crio-af5dd3ec9e15a72895d8ee52e4a6293f4ef1582dbd2f86386fd12db31043b0c9 WatchSource:0}: Error finding container af5dd3ec9e15a72895d8ee52e4a6293f4ef1582dbd2f86386fd12db31043b0c9: Status 404 returned error can't find the container with id af5dd3ec9e15a72895d8ee52e4a6293f4ef1582dbd2f86386fd12db31043b0c9 Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.444376 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" event={"ID":"332e9f21-0090-4aa0-987e-249f1b91e6d5","Type":"ContainerStarted","Data":"af5dd3ec9e15a72895d8ee52e4a6293f4ef1582dbd2f86386fd12db31043b0c9"} Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.454462 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" event={"ID":"f90e8f96-798e-4751-9c5e-5652011a5505","Type":"ContainerStarted","Data":"283e4bfe8a8ac13ba93ce00c1cab3e9f4c904071cdbe6a9e2ab490198e536a3b"} Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.459772 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9b6mp" event={"ID":"040a67bc-b920-407e-a0a9-babf6da613c0","Type":"ContainerStarted","Data":"ccacd540ad62e4101aaf820dc83b72fdf3ae07bb6ee68ff0b58e08cb94b884a9"} Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.625430 4982 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.627619 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.629825 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.629904 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.630070 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.630118 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.630214 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-lrspm" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.630324 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.630229 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.633519 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.641245 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.644465 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.809582 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.809649 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.809685 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4a95bd11-00cf-43f7-be50-fad75683bcd1-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.809710 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/4a95bd11-00cf-43f7-be50-fad75683bcd1-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.810189 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a95bd11-00cf-43f7-be50-fad75683bcd1-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.810226 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67kf9\" (UniqueName: \"kubernetes.io/projected/4a95bd11-00cf-43f7-be50-fad75683bcd1-kube-api-access-67kf9\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.810249 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-web-config\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.810310 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-config-volume\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.810368 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a95bd11-00cf-43f7-be50-fad75683bcd1-config-out\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.810493 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.810571 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.810602 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a95bd11-00cf-43f7-be50-fad75683bcd1-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 
crc kubenswrapper[4982]: I0224 14:57:17.911404 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.911753 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.912606 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a95bd11-00cf-43f7-be50-fad75683bcd1-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.912694 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.912768 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.912833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4a95bd11-00cf-43f7-be50-fad75683bcd1-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.912893 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4a95bd11-00cf-43f7-be50-fad75683bcd1-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.912968 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a95bd11-00cf-43f7-be50-fad75683bcd1-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.913087 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67kf9\" (UniqueName: \"kubernetes.io/projected/4a95bd11-00cf-43f7-be50-fad75683bcd1-kube-api-access-67kf9\") pod \"alertmanager-main-0\" (UID: 
\"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.913155 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-web-config\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.913220 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-config-volume\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.913289 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a95bd11-00cf-43f7-be50-fad75683bcd1-config-out\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.914916 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a95bd11-00cf-43f7-be50-fad75683bcd1-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.916147 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4a95bd11-00cf-43f7-be50-fad75683bcd1-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: E0224 14:57:17.916323 4982 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 24 14:57:17 crc kubenswrapper[4982]: E0224 14:57:17.916430 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-main-tls podName:4a95bd11-00cf-43f7-be50-fad75683bcd1 nodeName:}" failed. No retries permitted until 2026-02-24 14:57:18.416403668 +0000 UTC m=+500.035462171 (durationBeforeRetry 500ms). 
Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.917665 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4a95bd11-00cf-43f7-be50-fad75683bcd1-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.922427 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-config-volume\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.923043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-web-config\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.923170 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a95bd11-00cf-43f7-be50-fad75683bcd1-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.924481 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a95bd11-00cf-43f7-be50-fad75683bcd1-config-out\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.924821 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.925176 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.929802 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:17 crc kubenswrapper[4982]: I0224 14:57:17.932185 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-67kf9\" (UniqueName: \"kubernetes.io/projected/4a95bd11-00cf-43f7-be50-fad75683bcd1-kube-api-access-67kf9\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.419816 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.429805 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4a95bd11-00cf-43f7-be50-fad75683bcd1-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4a95bd11-00cf-43f7-be50-fad75683bcd1\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.469663 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" event={"ID":"332e9f21-0090-4aa0-987e-249f1b91e6d5","Type":"ContainerStarted","Data":"bc3e8475ff4db30df3f21c5b2be9064e07cbd7822522276284bf22385ef921b5"} Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.469738 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" event={"ID":"332e9f21-0090-4aa0-987e-249f1b91e6d5","Type":"ContainerStarted","Data":"0b850208eb1206b45e5de874cce87e47ea2f06e0551a0935d16c193b724aa094"} Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.470731 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9b6mp" event={"ID":"040a67bc-b920-407e-a0a9-babf6da613c0","Type":"ContainerStarted","Data":"3b856e04bb27822619022fb9bbeb6441463d577172f43599bbc7047883b04958"} Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.541273 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.615988 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf"] Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.618060 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.620852 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.621150 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-9md7f" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.620862 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.626833 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.627000 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-f4gca00355qj" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.627254 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.627367 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.628893 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-metrics-client-ca\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.629000 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-tls\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.629093 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.629131 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.629193 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-grpc-tls\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: 
\"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.629358 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.629565 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8gs\" (UniqueName: \"kubernetes.io/projected/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-kube-api-access-qw8gs\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.629599 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.637580 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf"] Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.730537 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.730668 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.731007 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-grpc-tls\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.731087 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.731147 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.731169 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8gs\" (UniqueName: \"kubernetes.io/projected/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-kube-api-access-qw8gs\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.731204 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-metrics-client-ca\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.731229 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-tls\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.733472 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-metrics-client-ca\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.734670 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.735560 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.736779 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-grpc-tls\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.736798 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-tls\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.736867 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.737994 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.748581 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8gs\" (UniqueName: \"kubernetes.io/projected/cbe181cd-d85a-4e9b-87c1-90a6a7d2e426-kube-api-access-qw8gs\") pod \"thanos-querier-566ccdd4c4-gjtxf\" (UID: \"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426\") " pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:18 crc kubenswrapper[4982]: I0224 14:57:18.944306 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:19 crc kubenswrapper[4982]: I0224 14:57:19.488399 4982 generic.go:334] "Generic (PLEG): container finished" podID="040a67bc-b920-407e-a0a9-babf6da613c0" containerID="3b856e04bb27822619022fb9bbeb6441463d577172f43599bbc7047883b04958" exitCode=0 Feb 24 14:57:19 crc kubenswrapper[4982]: I0224 14:57:19.488554 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9b6mp" event={"ID":"040a67bc-b920-407e-a0a9-babf6da613c0","Type":"ContainerDied","Data":"3b856e04bb27822619022fb9bbeb6441463d577172f43599bbc7047883b04958"} Feb 24 14:57:19 crc kubenswrapper[4982]: I0224 14:57:19.788980 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf"] Feb 24 14:57:19 crc kubenswrapper[4982]: W0224 14:57:19.795200 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbe181cd_d85a_4e9b_87c1_90a6a7d2e426.slice/crio-d593f38a2d9f38129694eabe1ecf2856cd519a506447dbb4f89111402f3a6f9a WatchSource:0}: Error finding container d593f38a2d9f38129694eabe1ecf2856cd519a506447dbb4f89111402f3a6f9a: Status 404 returned error can't find the container with id d593f38a2d9f38129694eabe1ecf2856cd519a506447dbb4f89111402f3a6f9a Feb 24 14:57:19 crc kubenswrapper[4982]: I0224 14:57:19.885594 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 24 14:57:19 crc kubenswrapper[4982]: W0224 14:57:19.887822 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a95bd11_00cf_43f7_be50_fad75683bcd1.slice/crio-c56b659af3b57c567354484c082d3963cc1c1bd142de2ab2e2fcf09e923e7bb6 WatchSource:0}: Error finding 
Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.503043 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9b6mp" event={"ID":"040a67bc-b920-407e-a0a9-babf6da613c0","Type":"ContainerStarted","Data":"b0bcc17b2be4f4d810c77f0173930a1974ff732fcaa3992739acd9df5a733386"} Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.503397 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9b6mp" event={"ID":"040a67bc-b920-407e-a0a9-babf6da613c0","Type":"ContainerStarted","Data":"d266b2d393d640168a7a2e099b64bddd3576608429ceaafd308057cb41e6581c"} Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.504988 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4a95bd11-00cf-43f7-be50-fad75683bcd1","Type":"ContainerStarted","Data":"c56b659af3b57c567354484c082d3963cc1c1bd142de2ab2e2fcf09e923e7bb6"} Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.508808 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" event={"ID":"332e9f21-0090-4aa0-987e-249f1b91e6d5","Type":"ContainerStarted","Data":"33892ec502724988f05f2146fd10187119160b6d674395997a9787a3e52b2120"} Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.511112 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" event={"ID":"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426","Type":"ContainerStarted","Data":"d593f38a2d9f38129694eabe1ecf2856cd519a506447dbb4f89111402f3a6f9a"} Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.514530 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" event={"ID":"f90e8f96-798e-4751-9c5e-5652011a5505","Type":"ContainerStarted","Data":"89939dc287156e6b6a11c94962f2371d282d1891315e7f05185d5f00d27bf9e4"} Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.514577 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" event={"ID":"f90e8f96-798e-4751-9c5e-5652011a5505","Type":"ContainerStarted","Data":"7045a62ea5e9b275527e69e1356619f33346e42ec8fbd0241a1061f3fe4a2663"} Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.514597 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" event={"ID":"f90e8f96-798e-4751-9c5e-5652011a5505","Type":"ContainerStarted","Data":"d8fc1d5617f580e4638f762d713e83f7237524e72626c1e440e76c55f7e7d469"} Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.567697 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-9b6mp" podStartSLOduration=3.227212217 podStartE2EDuration="4.567679765s" podCreationTimestamp="2026-02-24 14:57:16 +0000 UTC" firstStartedPulling="2026-02-24 14:57:16.837136526 +0000 UTC m=+498.456195019" lastFinishedPulling="2026-02-24 14:57:18.177604084 +0000 UTC m=+499.796662567" observedRunningTime="2026-02-24 14:57:20.536045122 +0000 UTC m=+502.155103625" watchObservedRunningTime="2026-02-24 14:57:20.567679765 +0000 UTC m=+502.186738258"
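
The pod_startup_latency_tracker record above is internally consistent arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (14:57:20.567679765 − 14:57:16 = 4.567679765s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling − firstStartedPulling, about 1.3404675s), leaving 3.227212217. The sketch below redoes the calculation from the logged wall-clock timestamps; it is illustrative arithmetic, not kubelet code, and it matches the logged values only to within about 10ns because the kubelet subtracts monotonic readings (the m=+… offsets) while the log prints wall-clock times:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the node-exporter-9b6mp record above.
	created := parse("2026-02-24 14:57:16 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2026-02-24 14:57:16.837136526 +0000 UTC") // firstStartedPulling
	lastPull := parse("2026-02-24 14:57:18.177604084 +0000 UTC")  // lastFinishedPulling
	running := parse("2026-02-24 14:57:20.567679765 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // podStartE2EDuration: 4.567679765s
	pull := lastPull.Sub(firstPull) // image-pull window: 1.340467558s (wall clock)
	slo := e2e - pull               // podStartSLOduration: ~3.2272122s
	fmt.Println(e2e, pull, slo)
}
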
pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4xhlx" podStartSLOduration=2.898726137 podStartE2EDuration="4.587227845s" podCreationTimestamp="2026-02-24 14:57:16 +0000 UTC" firstStartedPulling="2026-02-24 14:57:17.617317537 +0000 UTC m=+499.236376030" lastFinishedPulling="2026-02-24 14:57:19.305819245 +0000 UTC m=+500.924877738" observedRunningTime="2026-02-24 14:57:20.56567882 +0000 UTC m=+502.184737313" watchObservedRunningTime="2026-02-24 14:57:20.587227845 +0000 UTC m=+502.206286338" Feb 24 14:57:20 crc kubenswrapper[4982]: I0224 14:57:20.592881 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s4mbb" podStartSLOduration=2.606777584 podStartE2EDuration="4.5928672s" podCreationTimestamp="2026-02-24 14:57:16 +0000 UTC" firstStartedPulling="2026-02-24 14:57:17.319727689 +0000 UTC m=+498.938786192" lastFinishedPulling="2026-02-24 14:57:19.305817315 +0000 UTC m=+500.924875808" observedRunningTime="2026-02-24 14:57:20.586892165 +0000 UTC m=+502.205950658" watchObservedRunningTime="2026-02-24 14:57:20.5928672 +0000 UTC m=+502.211925693" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.318126 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8d6fdd984-xzt8z"] Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.319936 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.330392 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8d6fdd984-xzt8z"] Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.481992 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-service-ca\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.482039 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-serving-cert\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.482214 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5rn\" (UniqueName: \"kubernetes.io/projected/74f9f8aa-815b-427b-9695-6e272665ebc8-kube-api-access-pr5rn\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.482292 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-console-config\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.482412 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-oauth-config\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.482511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-trusted-ca-bundle\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.482547 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-oauth-serving-cert\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.523852 4982 generic.go:334] "Generic (PLEG): container finished" podID="4a95bd11-00cf-43f7-be50-fad75683bcd1" containerID="8032787aef1db6bc9639b7fe9e95f8cf0318b8bb9f62a576886e960024949d04" exitCode=0 Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.523955 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4a95bd11-00cf-43f7-be50-fad75683bcd1","Type":"ContainerDied","Data":"8032787aef1db6bc9639b7fe9e95f8cf0318b8bb9f62a576886e960024949d04"} Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.583905 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-oauth-config\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.583983 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-trusted-ca-bundle\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.584008 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-oauth-serving-cert\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.585298 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-oauth-serving-cert\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.585437 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-trusted-ca-bundle\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " 
pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.585468 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-service-ca\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.586081 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-serving-cert\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.586221 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5rn\" (UniqueName: \"kubernetes.io/projected/74f9f8aa-815b-427b-9695-6e272665ebc8-kube-api-access-pr5rn\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.586280 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-console-config\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.586885 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-service-ca\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.589026 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-console-config\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.589848 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-oauth-config\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.601781 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5rn\" (UniqueName: \"kubernetes.io/projected/74f9f8aa-815b-427b-9695-6e272665ebc8-kube-api-access-pr5rn\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.606646 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-serving-cert\") pod \"console-8d6fdd984-xzt8z\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc 
kubenswrapper[4982]: I0224 14:57:21.654869 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.831541 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-68547b4b7f-5mms7"] Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.832349 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.834981 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.835244 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.835405 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.835877 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-as3esfv6bb5us" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.836065 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-rd7cf" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.837038 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.837543 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68547b4b7f-5mms7"] Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.889833 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.890098 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-client-ca-bundle\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.890189 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-secret-metrics-client-certs\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.890280 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-metrics-server-audit-profiles\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " 
pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.890366 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bffqr\" (UniqueName: \"kubernetes.io/projected/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-kube-api-access-bffqr\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.890459 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-secret-metrics-server-tls\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.890600 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-audit-log\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.991228 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-client-ca-bundle\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.991590 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-secret-metrics-client-certs\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.991613 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-metrics-server-audit-profiles\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.992057 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bffqr\" (UniqueName: \"kubernetes.io/projected/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-kube-api-access-bffqr\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.992147 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-secret-metrics-server-tls\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.992251 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-audit-log\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.992376 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.992682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-audit-log\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.993464 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-metrics-server-audit-profiles\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.995275 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-secret-metrics-client-certs\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.995542 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:21 crc kubenswrapper[4982]: I0224 14:57:21.997192 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-client-ca-bundle\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.000249 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-secret-metrics-server-tls\") pod \"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.009308 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bffqr\" (UniqueName: \"kubernetes.io/projected/8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6-kube-api-access-bffqr\") pod 
\"metrics-server-68547b4b7f-5mms7\" (UID: \"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6\") " pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.169664 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.301825 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m"] Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.303680 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.308520 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.308951 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.318225 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m"] Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.407812 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/596b3e80-f1b9-44bd-856a-27e1abf877fa-monitoring-plugin-cert\") pod \"monitoring-plugin-6df9cc56-jhb9m\" (UID: \"596b3e80-f1b9-44bd-856a-27e1abf877fa\") " pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.509861 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/596b3e80-f1b9-44bd-856a-27e1abf877fa-monitoring-plugin-cert\") pod \"monitoring-plugin-6df9cc56-jhb9m\" (UID: \"596b3e80-f1b9-44bd-856a-27e1abf877fa\") " pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.526149 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/596b3e80-f1b9-44bd-856a-27e1abf877fa-monitoring-plugin-cert\") pod \"monitoring-plugin-6df9cc56-jhb9m\" (UID: \"596b3e80-f1b9-44bd-856a-27e1abf877fa\") " pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.539465 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" event={"ID":"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426","Type":"ContainerStarted","Data":"ce5a68129f84ec7aa984cf300a37d1888bcba99c41e7682f90b67883a7699c66"} Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.539574 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" event={"ID":"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426","Type":"ContainerStarted","Data":"8974b1f01500e9a4ed916b671a28942d5f0a77495856f6602c2b792395b9be5a"} Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.573162 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8d6fdd984-xzt8z"] Feb 24 14:57:22 crc kubenswrapper[4982]: W0224 14:57:22.605349 4982 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74f9f8aa_815b_427b_9695_6e272665ebc8.slice/crio-392cc27f274a77be7ebf883e627448b25f90df80e9f9a8f09fb181ab02aaf4ba WatchSource:0}: Error finding container 392cc27f274a77be7ebf883e627448b25f90df80e9f9a8f09fb181ab02aaf4ba: Status 404 returned error can't find the container with id 392cc27f274a77be7ebf883e627448b25f90df80e9f9a8f09fb181ab02aaf4ba Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.651184 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.656422 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-68547b4b7f-5mms7"] Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.873305 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m"] Feb 24 14:57:22 crc kubenswrapper[4982]: W0224 14:57:22.880421 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod596b3e80_f1b9_44bd_856a_27e1abf877fa.slice/crio-dc73f232e062cb386c6057cfe4b3e447d636368ab5f7df19c0790bf8b022ea5a WatchSource:0}: Error finding container dc73f232e062cb386c6057cfe4b3e447d636368ab5f7df19c0790bf8b022ea5a: Status 404 returned error can't find the container with id dc73f232e062cb386c6057cfe4b3e447d636368ab5f7df19c0790bf8b022ea5a Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.897272 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.899441 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.903512 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.903834 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.904045 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.904101 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.906827 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.906988 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.907072 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.907467 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-2wb7g" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.907287 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.907780 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6f5e3vpgkr5nc" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.908248 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.909986 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.914034 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.918962 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pktjp\" (UniqueName: \"kubernetes.io/projected/9c10d4f3-f509-485b-92e1-d68c997c05e2-kube-api-access-pktjp\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.918995 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9c10d4f3-f509-485b-92e1-d68c997c05e2-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919015 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919040 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-config\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919059 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919077 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919109 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919133 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919158 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919179 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919194 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-web-config\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919216 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c10d4f3-f509-485b-92e1-d68c997c05e2-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919235 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919258 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919276 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919300 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c10d4f3-f509-485b-92e1-d68c997c05e2-config-out\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919323 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.919340 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:22 crc kubenswrapper[4982]: I0224 14:57:22.924950 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.022427 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.022809 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.022854 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.022887 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c10d4f3-f509-485b-92e1-d68c997c05e2-config-out\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.022927 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.022947 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023005 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pktjp\" (UniqueName: \"kubernetes.io/projected/9c10d4f3-f509-485b-92e1-d68c997c05e2-kube-api-access-pktjp\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023026 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9c10d4f3-f509-485b-92e1-d68c997c05e2-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023046 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023080 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-config\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023099 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023119 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023216 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023255 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023280 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023305 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023338 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-web-config\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.023382 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c10d4f3-f509-485b-92e1-d68c997c05e2-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.024337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.025072 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.027271 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.027746 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/9c10d4f3-f509-485b-92e1-d68c997c05e2-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.028576 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.029276 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c10d4f3-f509-485b-92e1-d68c997c05e2-config-out\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.030210 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.030214 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-config\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.031401 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.031613 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c10d4f3-f509-485b-92e1-d68c997c05e2-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.031871 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.031947 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.034816 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.036620 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.037078 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-web-config\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.038257 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9c10d4f3-f509-485b-92e1-d68c997c05e2-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.039098 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c10d4f3-f509-485b-92e1-d68c997c05e2-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.043150 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pktjp\" (UniqueName: \"kubernetes.io/projected/9c10d4f3-f509-485b-92e1-d68c997c05e2-kube-api-access-pktjp\") pod \"prometheus-k8s-0\" (UID: \"9c10d4f3-f509-485b-92e1-d68c997c05e2\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.218027 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.549303 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" event={"ID":"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6","Type":"ContainerStarted","Data":"45bffa4de6e2bd5d955acaf1a92bef47405e3af05cb29b09b8e13763e81ebe47"} Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.550849 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8d6fdd984-xzt8z" event={"ID":"74f9f8aa-815b-427b-9695-6e272665ebc8","Type":"ContainerStarted","Data":"ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5"} Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.550887 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8d6fdd984-xzt8z" event={"ID":"74f9f8aa-815b-427b-9695-6e272665ebc8","Type":"ContainerStarted","Data":"392cc27f274a77be7ebf883e627448b25f90df80e9f9a8f09fb181ab02aaf4ba"} Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.554206 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" event={"ID":"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426","Type":"ContainerStarted","Data":"74a09dd96fba44ec2d58832e441ced52b55f2d095aad9aba3c1810512a75cd64"} Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.555426 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" event={"ID":"596b3e80-f1b9-44bd-856a-27e1abf877fa","Type":"ContainerStarted","Data":"dc73f232e062cb386c6057cfe4b3e447d636368ab5f7df19c0790bf8b022ea5a"} Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.571722 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8d6fdd984-xzt8z" podStartSLOduration=2.571701012 podStartE2EDuration="2.571701012s" podCreationTimestamp="2026-02-24 14:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:57:23.570853489 +0000 UTC m=+505.189911982" watchObservedRunningTime="2026-02-24 14:57:23.571701012 +0000 UTC m=+505.190759505" Feb 24 14:57:23 crc kubenswrapper[4982]: I0224 14:57:23.636371 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 24 14:57:24 crc kubenswrapper[4982]: W0224 14:57:24.857657 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c10d4f3_f509_485b_92e1_d68c997c05e2.slice/crio-61ad1b1611c86328df02a2ecea7b470040c81481889a3bd1f593587dfbb6f97e WatchSource:0}: Error finding container 61ad1b1611c86328df02a2ecea7b470040c81481889a3bd1f593587dfbb6f97e: Status 404 returned error can't find the container with id 61ad1b1611c86328df02a2ecea7b470040c81481889a3bd1f593587dfbb6f97e Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.578563 4982 generic.go:334] "Generic (PLEG): container finished" podID="9c10d4f3-f509-485b-92e1-d68c997c05e2" containerID="960333e401f69fb798924dfd6db2301fcfff83ae7ba7d8425d51ceb4d10e994d" exitCode=0 Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.579088 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9c10d4f3-f509-485b-92e1-d68c997c05e2","Type":"ContainerDied","Data":"960333e401f69fb798924dfd6db2301fcfff83ae7ba7d8425d51ceb4d10e994d"} Feb 24 14:57:25 crc 
kubenswrapper[4982]: I0224 14:57:25.579134 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9c10d4f3-f509-485b-92e1-d68c997c05e2","Type":"ContainerStarted","Data":"61ad1b1611c86328df02a2ecea7b470040c81481889a3bd1f593587dfbb6f97e"} Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.583174 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" event={"ID":"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426","Type":"ContainerStarted","Data":"955d29f1cb0dc01a58b11c04d50365898fe1b3cd6c217135fa0fc38cc7c0abef"} Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.588480 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" event={"ID":"596b3e80-f1b9-44bd-856a-27e1abf877fa","Type":"ContainerStarted","Data":"334d999d415eec4860e8261388736f6b0bac083b6e900bdaf621f37b0d796fb3"} Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.589236 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.593366 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" event={"ID":"8fedee08-a43f-4bcd-865f-7d3ad4a0a1a6","Type":"ContainerStarted","Data":"abc0458a636f85d19de744a6017ffb6d8a1043c04c85fffe8324ef23d5262d27"} Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.598083 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4a95bd11-00cf-43f7-be50-fad75683bcd1","Type":"ContainerStarted","Data":"d6227cd793241d9d6e9e96e63c1764922b99559ff0d3c65962c0b91cb043676e"} Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.600628 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.637482 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6df9cc56-jhb9m" podStartSLOduration=1.343139059 podStartE2EDuration="3.637442987s" podCreationTimestamp="2026-02-24 14:57:22 +0000 UTC" firstStartedPulling="2026-02-24 14:57:22.883484088 +0000 UTC m=+504.502542581" lastFinishedPulling="2026-02-24 14:57:25.177787976 +0000 UTC m=+506.796846509" observedRunningTime="2026-02-24 14:57:25.633198759 +0000 UTC m=+507.252257262" watchObservedRunningTime="2026-02-24 14:57:25.637442987 +0000 UTC m=+507.256501520" Feb 24 14:57:25 crc kubenswrapper[4982]: I0224 14:57:25.666375 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" podStartSLOduration=2.159811089 podStartE2EDuration="4.666345803s" podCreationTimestamp="2026-02-24 14:57:21 +0000 UTC" firstStartedPulling="2026-02-24 14:57:22.667840669 +0000 UTC m=+504.286899162" lastFinishedPulling="2026-02-24 14:57:25.174375383 +0000 UTC m=+506.793433876" observedRunningTime="2026-02-24 14:57:25.662720084 +0000 UTC m=+507.281778607" watchObservedRunningTime="2026-02-24 14:57:25.666345803 +0000 UTC m=+507.285404306" Feb 24 14:57:26 crc kubenswrapper[4982]: I0224 14:57:26.612735 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"4a95bd11-00cf-43f7-be50-fad75683bcd1","Type":"ContainerStarted","Data":"37355f53036d36b97ba73efa4c213ba05f6d8395ec14dd760b1e33aa808216c9"} Feb 24 14:57:26 crc kubenswrapper[4982]: I0224 14:57:26.613285 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4a95bd11-00cf-43f7-be50-fad75683bcd1","Type":"ContainerStarted","Data":"89adbbcb18f9a6ba5e568c4fa779dd824963a30829e6f53d0f636143ec818781"} Feb 24 14:57:26 crc kubenswrapper[4982]: I0224 14:57:26.613329 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4a95bd11-00cf-43f7-be50-fad75683bcd1","Type":"ContainerStarted","Data":"4b2a2e27e9ee03afbfa236d41cc8aea0e604b78536c953511bf5a08a6eac26bb"} Feb 24 14:57:26 crc kubenswrapper[4982]: I0224 14:57:26.613341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4a95bd11-00cf-43f7-be50-fad75683bcd1","Type":"ContainerStarted","Data":"89052133f735a9accb6830a294faa9ba781832da5c498e8be7b44652f2d4ff7c"} Feb 24 14:57:26 crc kubenswrapper[4982]: I0224 14:57:26.613352 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4a95bd11-00cf-43f7-be50-fad75683bcd1","Type":"ContainerStarted","Data":"a9fc6519283449cd6a53cdc27a07db4e8d5fc5c5d3ac7244f06fef563dbb3a26"} Feb 24 14:57:26 crc kubenswrapper[4982]: I0224 14:57:26.618899 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" event={"ID":"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426","Type":"ContainerStarted","Data":"adcb46588827f92dd8d81d7682756c4889ae12e7e34ef86e5beddae43f65f786"} Feb 24 14:57:26 crc kubenswrapper[4982]: I0224 14:57:26.618963 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" event={"ID":"cbe181cd-d85a-4e9b-87c1-90a6a7d2e426","Type":"ContainerStarted","Data":"2ff8e8934287361349234ebd78a42ac4f85d7eed2773476dee76ebd7ecd3115b"} Feb 24 14:57:26 crc kubenswrapper[4982]: I0224 14:57:26.657578 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.367413215 podStartE2EDuration="9.657547736s" podCreationTimestamp="2026-02-24 14:57:17 +0000 UTC" firstStartedPulling="2026-02-24 14:57:19.88997355 +0000 UTC m=+501.509032043" lastFinishedPulling="2026-02-24 14:57:25.180108081 +0000 UTC m=+506.799166564" observedRunningTime="2026-02-24 14:57:26.64829003 +0000 UTC m=+508.267348533" watchObservedRunningTime="2026-02-24 14:57:26.657547736 +0000 UTC m=+508.276606229" Feb 24 14:57:26 crc kubenswrapper[4982]: I0224 14:57:26.685428 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" podStartSLOduration=3.30378176 podStartE2EDuration="8.685407545s" podCreationTimestamp="2026-02-24 14:57:18 +0000 UTC" firstStartedPulling="2026-02-24 14:57:19.79717678 +0000 UTC m=+501.416235283" lastFinishedPulling="2026-02-24 14:57:25.178802555 +0000 UTC m=+506.797861068" observedRunningTime="2026-02-24 14:57:26.676368676 +0000 UTC m=+508.295427169" watchObservedRunningTime="2026-02-24 14:57:26.685407545 +0000 UTC m=+508.304466038" Feb 24 14:57:27 crc kubenswrapper[4982]: I0224 14:57:27.628822 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:27 crc 
kubenswrapper[4982]: I0224 14:57:27.639112 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-566ccdd4c4-gjtxf" Feb 24 14:57:29 crc kubenswrapper[4982]: I0224 14:57:29.654676 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9c10d4f3-f509-485b-92e1-d68c997c05e2","Type":"ContainerStarted","Data":"112508f4d6cdbc26101f5c4c775dbeaf197f377680d2e75489ce18d3ac0dcdea"} Feb 24 14:57:30 crc kubenswrapper[4982]: I0224 14:57:30.666276 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9c10d4f3-f509-485b-92e1-d68c997c05e2","Type":"ContainerStarted","Data":"417239aba039b68b74aa3e19b364539711104acdb97e23aba0ac6e5cd711d54b"} Feb 24 14:57:30 crc kubenswrapper[4982]: I0224 14:57:30.666328 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9c10d4f3-f509-485b-92e1-d68c997c05e2","Type":"ContainerStarted","Data":"a6af6c47199919a9293a310fbfc8e7d5422e90dccb9230a4b081f59edf571ec6"} Feb 24 14:57:30 crc kubenswrapper[4982]: I0224 14:57:30.666343 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9c10d4f3-f509-485b-92e1-d68c997c05e2","Type":"ContainerStarted","Data":"483ca118535e5ae9e4d2df61750929704f6fbb65e0d7ea7e23ab5389b336cb40"} Feb 24 14:57:30 crc kubenswrapper[4982]: I0224 14:57:30.666354 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9c10d4f3-f509-485b-92e1-d68c997c05e2","Type":"ContainerStarted","Data":"d05f3a6bad944895bd745b7b5beff995a4628780e799b1e6bff1fe41d37fce31"} Feb 24 14:57:30 crc kubenswrapper[4982]: I0224 14:57:30.666366 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"9c10d4f3-f509-485b-92e1-d68c997c05e2","Type":"ContainerStarted","Data":"b3c9e95c2df8f5476dc7d0a4b3ea1f210335bae041c89c84903b849abdd4b2f3"} Feb 24 14:57:30 crc kubenswrapper[4982]: I0224 14:57:30.723885 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.863680062 podStartE2EDuration="8.723839926s" podCreationTimestamp="2026-02-24 14:57:22 +0000 UTC" firstStartedPulling="2026-02-24 14:57:25.579854257 +0000 UTC m=+507.198912750" lastFinishedPulling="2026-02-24 14:57:29.440014111 +0000 UTC m=+511.059072614" observedRunningTime="2026-02-24 14:57:30.704304987 +0000 UTC m=+512.323363550" watchObservedRunningTime="2026-02-24 14:57:30.723839926 +0000 UTC m=+512.342898459" Feb 24 14:57:31 crc kubenswrapper[4982]: I0224 14:57:31.655708 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:31 crc kubenswrapper[4982]: I0224 14:57:31.656106 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:31 crc kubenswrapper[4982]: I0224 14:57:31.664371 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:31 crc kubenswrapper[4982]: I0224 14:57:31.679875 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:57:31 crc kubenswrapper[4982]: I0224 14:57:31.773046 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-nsv6c"] Feb 24 14:57:33 crc kubenswrapper[4982]: I0224 14:57:33.219362 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:57:42 crc kubenswrapper[4982]: I0224 14:57:42.170970 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:42 crc kubenswrapper[4982]: I0224 14:57:42.171651 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:57:56 crc kubenswrapper[4982]: I0224 14:57:56.859315 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-nsv6c" podUID="dfdbf1b1-2b07-4bff-ab32-759a436b4a78" containerName="console" containerID="cri-o://1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd" gracePeriod=15 Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.290442 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nsv6c_dfdbf1b1-2b07-4bff-ab32-759a436b4a78/console/0.log" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.290919 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.414065 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdbbx\" (UniqueName: \"kubernetes.io/projected/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-kube-api-access-vdbbx\") pod \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.414117 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-oauth-serving-cert\") pod \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.414167 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-config\") pod \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.414226 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-service-ca\") pod \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.414250 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-oauth-config\") pod \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.414307 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-trusted-ca-bundle\") pod \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " Feb 24 
14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.414336 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-serving-cert\") pod \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\" (UID: \"dfdbf1b1-2b07-4bff-ab32-759a436b4a78\") " Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.416825 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-service-ca" (OuterVolumeSpecName: "service-ca") pod "dfdbf1b1-2b07-4bff-ab32-759a436b4a78" (UID: "dfdbf1b1-2b07-4bff-ab32-759a436b4a78"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.417307 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dfdbf1b1-2b07-4bff-ab32-759a436b4a78" (UID: "dfdbf1b1-2b07-4bff-ab32-759a436b4a78"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.417881 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-config" (OuterVolumeSpecName: "console-config") pod "dfdbf1b1-2b07-4bff-ab32-759a436b4a78" (UID: "dfdbf1b1-2b07-4bff-ab32-759a436b4a78"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.418959 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dfdbf1b1-2b07-4bff-ab32-759a436b4a78" (UID: "dfdbf1b1-2b07-4bff-ab32-759a436b4a78"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.422072 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dfdbf1b1-2b07-4bff-ab32-759a436b4a78" (UID: "dfdbf1b1-2b07-4bff-ab32-759a436b4a78"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.422686 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dfdbf1b1-2b07-4bff-ab32-759a436b4a78" (UID: "dfdbf1b1-2b07-4bff-ab32-759a436b4a78"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.429829 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-kube-api-access-vdbbx" (OuterVolumeSpecName: "kube-api-access-vdbbx") pod "dfdbf1b1-2b07-4bff-ab32-759a436b4a78" (UID: "dfdbf1b1-2b07-4bff-ab32-759a436b4a78"). InnerVolumeSpecName "kube-api-access-vdbbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.516854 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.517106 4982 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.517125 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdbbx\" (UniqueName: \"kubernetes.io/projected/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-kube-api-access-vdbbx\") on node \"crc\" DevicePath \"\"" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.517144 4982 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.517159 4982 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.517176 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.517191 4982 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dfdbf1b1-2b07-4bff-ab32-759a436b4a78-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.904629 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nsv6c_dfdbf1b1-2b07-4bff-ab32-759a436b4a78/console/0.log" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.905003 4982 generic.go:334] "Generic (PLEG): container finished" podID="dfdbf1b1-2b07-4bff-ab32-759a436b4a78" containerID="1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd" exitCode=2 Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.905047 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nsv6c" event={"ID":"dfdbf1b1-2b07-4bff-ab32-759a436b4a78","Type":"ContainerDied","Data":"1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd"} Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.905086 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nsv6c" event={"ID":"dfdbf1b1-2b07-4bff-ab32-759a436b4a78","Type":"ContainerDied","Data":"2cb9dcd039a072d86f46a706e506f719f4e0247856d99b3ae3254c5c95e2c854"} Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.905120 4982 scope.go:117] "RemoveContainer" containerID="1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.905311 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nsv6c" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.943954 4982 scope.go:117] "RemoveContainer" containerID="1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd" Feb 24 14:57:57 crc kubenswrapper[4982]: E0224 14:57:57.944782 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd\": container with ID starting with 1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd not found: ID does not exist" containerID="1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.944847 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd"} err="failed to get container status \"1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd\": rpc error: code = NotFound desc = could not find container \"1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd\": container with ID starting with 1d73e069698d26c45d830a483977a98f7786cb72f74beda4090597335c0ea7fd not found: ID does not exist" Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.947059 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nsv6c"] Feb 24 14:57:57 crc kubenswrapper[4982]: I0224 14:57:57.950740 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-nsv6c"] Feb 24 14:57:59 crc kubenswrapper[4982]: I0224 14:57:59.154263 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfdbf1b1-2b07-4bff-ab32-759a436b4a78" path="/var/lib/kubelet/pods/dfdbf1b1-2b07-4bff-ab32-759a436b4a78/volumes" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.151939 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532418-cbbzs"] Feb 24 14:58:00 crc kubenswrapper[4982]: E0224 14:58:00.153009 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdbf1b1-2b07-4bff-ab32-759a436b4a78" containerName="console" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.153050 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdbf1b1-2b07-4bff-ab32-759a436b4a78" containerName="console" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.153305 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdbf1b1-2b07-4bff-ab32-759a436b4a78" containerName="console" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.154266 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532418-cbbzs" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.156149 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532418-cbbzs"] Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.156609 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.157416 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.158833 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.259217 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7rv\" (UniqueName: \"kubernetes.io/projected/9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9-kube-api-access-7n7rv\") pod \"auto-csr-approver-29532418-cbbzs\" (UID: \"9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9\") " pod="openshift-infra/auto-csr-approver-29532418-cbbzs" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.361766 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7rv\" (UniqueName: \"kubernetes.io/projected/9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9-kube-api-access-7n7rv\") pod \"auto-csr-approver-29532418-cbbzs\" (UID: \"9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9\") " pod="openshift-infra/auto-csr-approver-29532418-cbbzs" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.388572 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7rv\" (UniqueName: \"kubernetes.io/projected/9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9-kube-api-access-7n7rv\") pod \"auto-csr-approver-29532418-cbbzs\" (UID: \"9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9\") " pod="openshift-infra/auto-csr-approver-29532418-cbbzs" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.478087 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532418-cbbzs" Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.762481 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532418-cbbzs"] Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.778724 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 14:58:00 crc kubenswrapper[4982]: I0224 14:58:00.957459 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532418-cbbzs" event={"ID":"9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9","Type":"ContainerStarted","Data":"0b7d985d6e43601ba92f782ffaa8086d3accaea30b49c4596aaf7596e8826c0c"} Feb 24 14:58:01 crc kubenswrapper[4982]: I0224 14:58:01.967862 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532418-cbbzs" event={"ID":"9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9","Type":"ContainerStarted","Data":"ef076d30211a7b15f92bbf597b640a94664865225364d019d17061fa07d3fae9"} Feb 24 14:58:01 crc kubenswrapper[4982]: I0224 14:58:01.990807 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532418-cbbzs" podStartSLOduration=1.190603154 podStartE2EDuration="1.990789588s" podCreationTimestamp="2026-02-24 14:58:00 +0000 UTC" firstStartedPulling="2026-02-24 14:58:00.77824078 +0000 UTC m=+542.397299303" lastFinishedPulling="2026-02-24 14:58:01.578427204 +0000 UTC m=+543.197485737" observedRunningTime="2026-02-24 14:58:01.984736772 +0000 UTC m=+543.603795265" watchObservedRunningTime="2026-02-24 14:58:01.990789588 +0000 UTC m=+543.609848081" Feb 24 14:58:02 crc kubenswrapper[4982]: I0224 14:58:02.182345 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:58:02 crc kubenswrapper[4982]: I0224 14:58:02.190121 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-68547b4b7f-5mms7" Feb 24 14:58:02 crc kubenswrapper[4982]: I0224 14:58:02.976334 4982 generic.go:334] "Generic (PLEG): container finished" podID="9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9" containerID="ef076d30211a7b15f92bbf597b640a94664865225364d019d17061fa07d3fae9" exitCode=0 Feb 24 14:58:02 crc kubenswrapper[4982]: I0224 14:58:02.976630 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532418-cbbzs" event={"ID":"9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9","Type":"ContainerDied","Data":"ef076d30211a7b15f92bbf597b640a94664865225364d019d17061fa07d3fae9"} Feb 24 14:58:04 crc kubenswrapper[4982]: I0224 14:58:04.303076 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532418-cbbzs" Feb 24 14:58:04 crc kubenswrapper[4982]: I0224 14:58:04.428314 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7rv\" (UniqueName: \"kubernetes.io/projected/9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9-kube-api-access-7n7rv\") pod \"9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9\" (UID: \"9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9\") " Feb 24 14:58:04 crc kubenswrapper[4982]: I0224 14:58:04.434740 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9-kube-api-access-7n7rv" (OuterVolumeSpecName: "kube-api-access-7n7rv") pod "9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9" (UID: "9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9"). InnerVolumeSpecName "kube-api-access-7n7rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:58:04 crc kubenswrapper[4982]: I0224 14:58:04.530346 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n7rv\" (UniqueName: \"kubernetes.io/projected/9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9-kube-api-access-7n7rv\") on node \"crc\" DevicePath \"\"" Feb 24 14:58:04 crc kubenswrapper[4982]: I0224 14:58:04.993360 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532418-cbbzs" event={"ID":"9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9","Type":"ContainerDied","Data":"0b7d985d6e43601ba92f782ffaa8086d3accaea30b49c4596aaf7596e8826c0c"} Feb 24 14:58:04 crc kubenswrapper[4982]: I0224 14:58:04.993406 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b7d985d6e43601ba92f782ffaa8086d3accaea30b49c4596aaf7596e8826c0c" Feb 24 14:58:04 crc kubenswrapper[4982]: I0224 14:58:04.993441 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532418-cbbzs" Feb 24 14:58:05 crc kubenswrapper[4982]: I0224 14:58:05.046257 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532412-hd7v7"] Feb 24 14:58:05 crc kubenswrapper[4982]: I0224 14:58:05.050756 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532412-hd7v7"] Feb 24 14:58:05 crc kubenswrapper[4982]: I0224 14:58:05.151790 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b9348a-b44f-4ecf-9043-0948b992d64e" path="/var/lib/kubelet/pods/41b9348a-b44f-4ecf-9043-0948b992d64e/volumes" Feb 24 14:58:23 crc kubenswrapper[4982]: I0224 14:58:23.219184 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:58:23 crc kubenswrapper[4982]: I0224 14:58:23.266529 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:58:24 crc kubenswrapper[4982]: I0224 14:58:24.177821 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.169678 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-c76489864-rvq9n"] Feb 24 14:58:45 crc kubenswrapper[4982]: E0224 14:58:45.170657 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9" containerName="oc" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.170679 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9" containerName="oc" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.170872 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9" containerName="oc" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.171537 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.189812 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c76489864-rvq9n"] Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.292188 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-oauth-serving-cert\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.292256 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbh8p\" (UniqueName: \"kubernetes.io/projected/e3acc9b8-3d10-46bd-9121-3c505de588b6-kube-api-access-hbh8p\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.292291 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-service-ca\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.292329 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-oauth-config\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.292384 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-trusted-ca-bundle\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.292441 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-serving-cert\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.292462 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-config\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.393643 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-serving-cert\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc 
kubenswrapper[4982]: I0224 14:58:45.393688 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-config\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.393728 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-oauth-serving-cert\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.393762 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbh8p\" (UniqueName: \"kubernetes.io/projected/e3acc9b8-3d10-46bd-9121-3c505de588b6-kube-api-access-hbh8p\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.393789 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-service-ca\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.393819 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-oauth-config\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.393864 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-trusted-ca-bundle\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.395021 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-config\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.395053 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-service-ca\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.395123 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-trusted-ca-bundle\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.395126 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-oauth-serving-cert\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.399473 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-serving-cert\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.399489 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-oauth-config\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.411622 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbh8p\" (UniqueName: \"kubernetes.io/projected/e3acc9b8-3d10-46bd-9121-3c505de588b6-kube-api-access-hbh8p\") pod \"console-c76489864-rvq9n\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.530488 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:45 crc kubenswrapper[4982]: I0224 14:58:45.818419 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c76489864-rvq9n"] Feb 24 14:58:46 crc kubenswrapper[4982]: I0224 14:58:46.307403 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c76489864-rvq9n" event={"ID":"e3acc9b8-3d10-46bd-9121-3c505de588b6","Type":"ContainerStarted","Data":"ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846"} Feb 24 14:58:46 crc kubenswrapper[4982]: I0224 14:58:46.307954 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c76489864-rvq9n" event={"ID":"e3acc9b8-3d10-46bd-9121-3c505de588b6","Type":"ContainerStarted","Data":"739c80414d7ffd56967986d00e6a8d3c953a69c6ab7d67414da0932dff02b499"} Feb 24 14:58:46 crc kubenswrapper[4982]: I0224 14:58:46.345815 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-c76489864-rvq9n" podStartSLOduration=1.345786811 podStartE2EDuration="1.345786811s" podCreationTimestamp="2026-02-24 14:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 14:58:46.333484202 +0000 UTC m=+587.952542735" watchObservedRunningTime="2026-02-24 14:58:46.345786811 +0000 UTC m=+587.964845344" Feb 24 14:58:55 crc kubenswrapper[4982]: I0224 14:58:55.530865 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:55 crc kubenswrapper[4982]: I0224 14:58:55.531596 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:55 crc kubenswrapper[4982]: I0224 14:58:55.537228 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:56 crc kubenswrapper[4982]: I0224 14:58:56.393680 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-c76489864-rvq9n" Feb 24 14:58:56 crc kubenswrapper[4982]: I0224 14:58:56.481124 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8d6fdd984-xzt8z"] Feb 24 14:59:19 crc kubenswrapper[4982]: I0224 14:59:19.343997 4982 scope.go:117] "RemoveContainer" containerID="3ffa466dc0c83de94f5eee93f992f8cd8b2ed2db77016ff31c430fde6502cfb1" Feb 24 14:59:19 crc kubenswrapper[4982]: I0224 14:59:19.363098 4982 scope.go:117] "RemoveContainer" containerID="52c37b234f73f64a3e962c5797162e7f29b78a0427ddd14b01c4669edfbf927a" Feb 24 14:59:19 crc kubenswrapper[4982]: I0224 14:59:19.388689 4982 scope.go:117] "RemoveContainer" containerID="ff3df357ccf88017218d789c88707d1a0b30a887da0bae389fa2f09228548882" Feb 24 14:59:19 crc kubenswrapper[4982]: I0224 14:59:19.409850 4982 scope.go:117] "RemoveContainer" containerID="2d0db04461c42819fc1d3e6b8a792dcd6c3101ed9c802d2c91072e563b401934" Feb 24 14:59:21 crc kubenswrapper[4982]: I0224 14:59:21.538246 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-8d6fdd984-xzt8z" podUID="74f9f8aa-815b-427b-9695-6e272665ebc8" containerName="console" containerID="cri-o://ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5" gracePeriod=15 Feb 24 14:59:21 crc kubenswrapper[4982]: I0224 14:59:21.656495 4982 patch_prober.go:28] interesting pod/console-8d6fdd984-xzt8z container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/health\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Feb 24 14:59:21 crc kubenswrapper[4982]: I0224 14:59:21.656594 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-8d6fdd984-xzt8z" podUID="74f9f8aa-815b-427b-9695-6e272665ebc8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.87:8443/health\": dial tcp 10.217.0.87:8443: connect: connection refused" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.011061 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-8d6fdd984-xzt8z_74f9f8aa-815b-427b-9695-6e272665ebc8/console/0.log" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.011384 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.138341 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-oauth-serving-cert\") pod \"74f9f8aa-815b-427b-9695-6e272665ebc8\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.138380 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-trusted-ca-bundle\") pod \"74f9f8aa-815b-427b-9695-6e272665ebc8\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.138406 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-service-ca\") pod \"74f9f8aa-815b-427b-9695-6e272665ebc8\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.138426 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-serving-cert\") pod \"74f9f8aa-815b-427b-9695-6e272665ebc8\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.138473 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr5rn\" (UniqueName: \"kubernetes.io/projected/74f9f8aa-815b-427b-9695-6e272665ebc8-kube-api-access-pr5rn\") pod \"74f9f8aa-815b-427b-9695-6e272665ebc8\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.138512 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-console-config\") pod \"74f9f8aa-815b-427b-9695-6e272665ebc8\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.138528 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-oauth-config\") pod \"74f9f8aa-815b-427b-9695-6e272665ebc8\" (UID: \"74f9f8aa-815b-427b-9695-6e272665ebc8\") " Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.139217 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "74f9f8aa-815b-427b-9695-6e272665ebc8" (UID: "74f9f8aa-815b-427b-9695-6e272665ebc8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.139263 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-console-config" (OuterVolumeSpecName: "console-config") pod "74f9f8aa-815b-427b-9695-6e272665ebc8" (UID: "74f9f8aa-815b-427b-9695-6e272665ebc8"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.139405 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "74f9f8aa-815b-427b-9695-6e272665ebc8" (UID: "74f9f8aa-815b-427b-9695-6e272665ebc8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.139644 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-service-ca" (OuterVolumeSpecName: "service-ca") pod "74f9f8aa-815b-427b-9695-6e272665ebc8" (UID: "74f9f8aa-815b-427b-9695-6e272665ebc8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.144603 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "74f9f8aa-815b-427b-9695-6e272665ebc8" (UID: "74f9f8aa-815b-427b-9695-6e272665ebc8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.144690 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f9f8aa-815b-427b-9695-6e272665ebc8-kube-api-access-pr5rn" (OuterVolumeSpecName: "kube-api-access-pr5rn") pod "74f9f8aa-815b-427b-9695-6e272665ebc8" (UID: "74f9f8aa-815b-427b-9695-6e272665ebc8"). InnerVolumeSpecName "kube-api-access-pr5rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.146092 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "74f9f8aa-815b-427b-9695-6e272665ebc8" (UID: "74f9f8aa-815b-427b-9695-6e272665ebc8"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.240903 4982 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.240967 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.240984 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.240994 4982 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.241004 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr5rn\" (UniqueName: \"kubernetes.io/projected/74f9f8aa-815b-427b-9695-6e272665ebc8-kube-api-access-pr5rn\") on node \"crc\" DevicePath \"\"" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.241013 4982 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74f9f8aa-815b-427b-9695-6e272665ebc8-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.241022 4982 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74f9f8aa-815b-427b-9695-6e272665ebc8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.617587 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-8d6fdd984-xzt8z_74f9f8aa-815b-427b-9695-6e272665ebc8/console/0.log" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.617704 4982 generic.go:334] "Generic (PLEG): container finished" podID="74f9f8aa-815b-427b-9695-6e272665ebc8" containerID="ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5" exitCode=2 Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.617817 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8d6fdd984-xzt8z" event={"ID":"74f9f8aa-815b-427b-9695-6e272665ebc8","Type":"ContainerDied","Data":"ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5"} Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.617827 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8d6fdd984-xzt8z" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.617871 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8d6fdd984-xzt8z" event={"ID":"74f9f8aa-815b-427b-9695-6e272665ebc8","Type":"ContainerDied","Data":"392cc27f274a77be7ebf883e627448b25f90df80e9f9a8f09fb181ab02aaf4ba"} Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.617902 4982 scope.go:117] "RemoveContainer" containerID="ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.649638 4982 scope.go:117] "RemoveContainer" containerID="ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5" Feb 24 14:59:22 crc kubenswrapper[4982]: E0224 14:59:22.651022 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5\": container with ID starting with ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5 not found: ID does not exist" containerID="ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.651194 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5"} err="failed to get container status \"ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5\": rpc error: code = NotFound desc = could not find container \"ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5\": container with ID starting with ac23401e9bcc6cef1840f63fafaeeb50d597255df2b59decb2305883826690a5 not found: ID does not exist" Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.688196 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-8d6fdd984-xzt8z"] Feb 24 14:59:22 crc kubenswrapper[4982]: I0224 14:59:22.701851 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-8d6fdd984-xzt8z"] Feb 24 14:59:23 crc kubenswrapper[4982]: I0224 14:59:23.157671 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f9f8aa-815b-427b-9695-6e272665ebc8" path="/var/lib/kubelet/pods/74f9f8aa-815b-427b-9695-6e272665ebc8/volumes" Feb 24 14:59:38 crc kubenswrapper[4982]: I0224 14:59:38.738771 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 14:59:38 crc kubenswrapper[4982]: I0224 14:59:38.739610 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.156105 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532420-xw6gk"] Feb 24 15:00:00 crc kubenswrapper[4982]: E0224 15:00:00.157972 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f9f8aa-815b-427b-9695-6e272665ebc8" containerName="console" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 
15:00:00.158008 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f9f8aa-815b-427b-9695-6e272665ebc8" containerName="console" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.158340 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f9f8aa-815b-427b-9695-6e272665ebc8" containerName="console" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.159344 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532420-xw6gk" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.163232 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.163801 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.166747 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.171992 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf"] Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.177997 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.184995 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.186106 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.190541 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-secret-volume\") pod \"collect-profiles-29532420-vw9gf\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.190671 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp94x\" (UniqueName: \"kubernetes.io/projected/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-kube-api-access-qp94x\") pod \"collect-profiles-29532420-vw9gf\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.190938 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532420-xw6gk"] Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.191369 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-config-volume\") pod \"collect-profiles-29532420-vw9gf\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.215616 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf"] Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.291980 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-config-volume\") pod \"collect-profiles-29532420-vw9gf\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.292314 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4jnz\" (UniqueName: \"kubernetes.io/projected/d787834f-4c51-4859-baa1-ae0f6e91b1a6-kube-api-access-v4jnz\") pod \"auto-csr-approver-29532420-xw6gk\" (UID: \"d787834f-4c51-4859-baa1-ae0f6e91b1a6\") " pod="openshift-infra/auto-csr-approver-29532420-xw6gk" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.292399 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-secret-volume\") pod \"collect-profiles-29532420-vw9gf\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.292481 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp94x\" (UniqueName: \"kubernetes.io/projected/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-kube-api-access-qp94x\") pod \"collect-profiles-29532420-vw9gf\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.293978 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-config-volume\") pod \"collect-profiles-29532420-vw9gf\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.301208 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-secret-volume\") pod \"collect-profiles-29532420-vw9gf\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.311478 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp94x\" (UniqueName: \"kubernetes.io/projected/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-kube-api-access-qp94x\") pod \"collect-profiles-29532420-vw9gf\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.394230 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4jnz\" (UniqueName: \"kubernetes.io/projected/d787834f-4c51-4859-baa1-ae0f6e91b1a6-kube-api-access-v4jnz\") pod \"auto-csr-approver-29532420-xw6gk\" (UID: \"d787834f-4c51-4859-baa1-ae0f6e91b1a6\") " pod="openshift-infra/auto-csr-approver-29532420-xw6gk" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.416949 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v4jnz\" (UniqueName: \"kubernetes.io/projected/d787834f-4c51-4859-baa1-ae0f6e91b1a6-kube-api-access-v4jnz\") pod \"auto-csr-approver-29532420-xw6gk\" (UID: \"d787834f-4c51-4859-baa1-ae0f6e91b1a6\") " pod="openshift-infra/auto-csr-approver-29532420-xw6gk" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.515895 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532420-xw6gk" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.527205 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.808152 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf"] Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.940070 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" event={"ID":"fe339d91-5ca8-43ef-ab0a-552ebfc10fec","Type":"ContainerStarted","Data":"5bba91b626ad3219dcc2ccac55a24ac4389bad36882189f71a14e9f00a383bad"} Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.940116 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" event={"ID":"fe339d91-5ca8-43ef-ab0a-552ebfc10fec","Type":"ContainerStarted","Data":"049e5923f5d3f2f6dc30f7018a153968b8cff3a7cc8c44d82bbb6c9c791fd77f"} Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.955688 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" podStartSLOduration=0.95566694 podStartE2EDuration="955.66694ms" podCreationTimestamp="2026-02-24 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:00:00.954122027 +0000 UTC m=+662.573180520" watchObservedRunningTime="2026-02-24 15:00:00.95566694 +0000 UTC m=+662.574725443" Feb 24 15:00:00 crc kubenswrapper[4982]: I0224 15:00:00.984653 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532420-xw6gk"] Feb 24 15:00:00 crc kubenswrapper[4982]: W0224 15:00:00.989901 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd787834f_4c51_4859_baa1_ae0f6e91b1a6.slice/crio-3f5453664c22a778692e5959b3ea03d527d8feef13c28aec44bf3b43750ede02 WatchSource:0}: Error finding container 3f5453664c22a778692e5959b3ea03d527d8feef13c28aec44bf3b43750ede02: Status 404 returned error can't find the container with id 3f5453664c22a778692e5959b3ea03d527d8feef13c28aec44bf3b43750ede02 Feb 24 15:00:01 crc kubenswrapper[4982]: I0224 15:00:01.950757 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532420-xw6gk" event={"ID":"d787834f-4c51-4859-baa1-ae0f6e91b1a6","Type":"ContainerStarted","Data":"3f5453664c22a778692e5959b3ea03d527d8feef13c28aec44bf3b43750ede02"} Feb 24 15:00:01 crc kubenswrapper[4982]: I0224 15:00:01.953421 4982 generic.go:334] "Generic (PLEG): container finished" podID="fe339d91-5ca8-43ef-ab0a-552ebfc10fec" containerID="5bba91b626ad3219dcc2ccac55a24ac4389bad36882189f71a14e9f00a383bad" exitCode=0 Feb 24 15:00:01 crc kubenswrapper[4982]: I0224 
15:00:01.953560 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" event={"ID":"fe339d91-5ca8-43ef-ab0a-552ebfc10fec","Type":"ContainerDied","Data":"5bba91b626ad3219dcc2ccac55a24ac4389bad36882189f71a14e9f00a383bad"} Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.213926 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.332086 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp94x\" (UniqueName: \"kubernetes.io/projected/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-kube-api-access-qp94x\") pod \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.332149 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-config-volume\") pod \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.332266 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-secret-volume\") pod \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\" (UID: \"fe339d91-5ca8-43ef-ab0a-552ebfc10fec\") " Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.333072 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe339d91-5ca8-43ef-ab0a-552ebfc10fec" (UID: "fe339d91-5ca8-43ef-ab0a-552ebfc10fec"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.337595 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe339d91-5ca8-43ef-ab0a-552ebfc10fec" (UID: "fe339d91-5ca8-43ef-ab0a-552ebfc10fec"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.337669 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-kube-api-access-qp94x" (OuterVolumeSpecName: "kube-api-access-qp94x") pod "fe339d91-5ca8-43ef-ab0a-552ebfc10fec" (UID: "fe339d91-5ca8-43ef-ab0a-552ebfc10fec"). InnerVolumeSpecName "kube-api-access-qp94x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.434684 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp94x\" (UniqueName: \"kubernetes.io/projected/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-kube-api-access-qp94x\") on node \"crc\" DevicePath \"\"" Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.434739 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 15:00:04 crc kubenswrapper[4982]: I0224 15:00:04.434782 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe339d91-5ca8-43ef-ab0a-552ebfc10fec-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 15:00:05 crc kubenswrapper[4982]: I0224 15:00:05.044958 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" event={"ID":"fe339d91-5ca8-43ef-ab0a-552ebfc10fec","Type":"ContainerDied","Data":"049e5923f5d3f2f6dc30f7018a153968b8cff3a7cc8c44d82bbb6c9c791fd77f"} Feb 24 15:00:05 crc kubenswrapper[4982]: I0224 15:00:05.045018 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="049e5923f5d3f2f6dc30f7018a153968b8cff3a7cc8c44d82bbb6c9c791fd77f" Feb 24 15:00:05 crc kubenswrapper[4982]: I0224 15:00:05.045101 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf" Feb 24 15:00:06 crc kubenswrapper[4982]: E0224 15:00:06.382016 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd787834f_4c51_4859_baa1_ae0f6e91b1a6.slice/crio-conmon-50b296e745f3aa0984f6b6aec06983dc6f0f77505d64261f5e23c62b014d3b02.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:00:07 crc kubenswrapper[4982]: I0224 15:00:07.063278 4982 generic.go:334] "Generic (PLEG): container finished" podID="d787834f-4c51-4859-baa1-ae0f6e91b1a6" containerID="50b296e745f3aa0984f6b6aec06983dc6f0f77505d64261f5e23c62b014d3b02" exitCode=0 Feb 24 15:00:07 crc kubenswrapper[4982]: I0224 15:00:07.063355 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532420-xw6gk" event={"ID":"d787834f-4c51-4859-baa1-ae0f6e91b1a6","Type":"ContainerDied","Data":"50b296e745f3aa0984f6b6aec06983dc6f0f77505d64261f5e23c62b014d3b02"} Feb 24 15:00:08 crc kubenswrapper[4982]: I0224 15:00:08.401302 4982 util.go:48] "No ready sandbox for pod can be found. 
Feb 24 15:00:08 crc kubenswrapper[4982]: I0224 15:00:08.499588 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4jnz\" (UniqueName: \"kubernetes.io/projected/d787834f-4c51-4859-baa1-ae0f6e91b1a6-kube-api-access-v4jnz\") pod \"d787834f-4c51-4859-baa1-ae0f6e91b1a6\" (UID: \"d787834f-4c51-4859-baa1-ae0f6e91b1a6\") "
Feb 24 15:00:08 crc kubenswrapper[4982]: I0224 15:00:08.510779 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d787834f-4c51-4859-baa1-ae0f6e91b1a6-kube-api-access-v4jnz" (OuterVolumeSpecName: "kube-api-access-v4jnz") pod "d787834f-4c51-4859-baa1-ae0f6e91b1a6" (UID: "d787834f-4c51-4859-baa1-ae0f6e91b1a6"). InnerVolumeSpecName "kube-api-access-v4jnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:00:08 crc kubenswrapper[4982]: I0224 15:00:08.602117 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4jnz\" (UniqueName: \"kubernetes.io/projected/d787834f-4c51-4859-baa1-ae0f6e91b1a6-kube-api-access-v4jnz\") on node \"crc\" DevicePath \"\""
Feb 24 15:00:08 crc kubenswrapper[4982]: I0224 15:00:08.738668 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 15:00:08 crc kubenswrapper[4982]: I0224 15:00:08.738756 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 15:00:09 crc kubenswrapper[4982]: I0224 15:00:09.080259 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532420-xw6gk" event={"ID":"d787834f-4c51-4859-baa1-ae0f6e91b1a6","Type":"ContainerDied","Data":"3f5453664c22a778692e5959b3ea03d527d8feef13c28aec44bf3b43750ede02"}
Feb 24 15:00:09 crc kubenswrapper[4982]: I0224 15:00:09.080299 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532420-xw6gk"
Feb 24 15:00:09 crc kubenswrapper[4982]: I0224 15:00:09.080303 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f5453664c22a778692e5959b3ea03d527d8feef13c28aec44bf3b43750ede02"
Feb 24 15:00:09 crc kubenswrapper[4982]: I0224 15:00:09.484245 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532414-26bx7"]
Feb 24 15:00:09 crc kubenswrapper[4982]: I0224 15:00:09.491464 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532414-26bx7"]
Feb 24 15:00:11 crc kubenswrapper[4982]: I0224 15:00:11.158128 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee4a86b-61ae-481f-9744-0635388f16ab" path="/var/lib/kubelet/pods/cee4a86b-61ae-481f-9744-0635388f16ab/volumes"
Feb 24 15:00:19 crc kubenswrapper[4982]: I0224 15:00:19.438308 4982 scope.go:117] "RemoveContainer" containerID="a2a34f17354d1fd4d5d21f48d2aef3a0361410c6416d0ce43a0ca7b4f3a4d0f9"
Feb 24 15:00:19 crc kubenswrapper[4982]: I0224 15:00:19.482490 4982 scope.go:117] "RemoveContainer" containerID="e06d51792bf74de0566b614c58673fefa982a507bc7142c43671e9080e46ab78"
Feb 24 15:00:19 crc kubenswrapper[4982]: I0224 15:00:19.503366 4982 scope.go:117] "RemoveContainer" containerID="7cc68c3c8fd4404c6193cecd509e0b4736b396791b46e54417d0a9b79dd6969f"
Feb 24 15:00:19 crc kubenswrapper[4982]: I0224 15:00:19.532080 4982 scope.go:117] "RemoveContainer" containerID="ad027a7c2604115a76c210314f8e96e20608e50676104392ff08f014a99e575c"
Feb 24 15:00:19 crc kubenswrapper[4982]: I0224 15:00:19.552556 4982 scope.go:117] "RemoveContainer" containerID="21f25b7b3df41cd9c2ac09af30d904cf8da15f52894f1ea27461071ca796157e"
Feb 24 15:00:19 crc kubenswrapper[4982]: I0224 15:00:19.571477 4982 scope.go:117] "RemoveContainer" containerID="7a3e7a6df569f4f6b595f721597f754515ff10e96f59fc2c39bf21bb89ca5c91"
Feb 24 15:00:19 crc kubenswrapper[4982]: I0224 15:00:19.593475 4982 scope.go:117] "RemoveContainer" containerID="0c9bf3e712ada87a63ef2870877b499c03076930e9d7a74c367119890d3f7854"
Feb 24 15:00:38 crc kubenswrapper[4982]: I0224 15:00:38.738938 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 15:00:38 crc kubenswrapper[4982]: I0224 15:00:38.739837 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 15:00:38 crc kubenswrapper[4982]: I0224 15:00:38.739909 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf"
Feb 24 15:00:38 crc kubenswrapper[4982]: I0224 15:00:38.740870 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5a98397f8b5beef975d846a08d561095dbc655637a46095abad7d674ae42009"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 15:00:38 crc kubenswrapper[4982]: I0224 15:00:38.741004 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://e5a98397f8b5beef975d846a08d561095dbc655637a46095abad7d674ae42009" gracePeriod=600
Feb 24 15:00:39 crc kubenswrapper[4982]: I0224 15:00:39.324401 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="e5a98397f8b5beef975d846a08d561095dbc655637a46095abad7d674ae42009" exitCode=0
Feb 24 15:00:39 crc kubenswrapper[4982]: I0224 15:00:39.324532 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"e5a98397f8b5beef975d846a08d561095dbc655637a46095abad7d674ae42009"}
Feb 24 15:00:39 crc kubenswrapper[4982]: I0224 15:00:39.324878 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"07b5b5ef08503bdcb3a99da20553629509399385e0be96239d55f7f7f354eb91"}
Feb 24 15:00:39 crc kubenswrapper[4982]: I0224 15:00:39.325587 4982 scope.go:117] "RemoveContainer" containerID="5f71e224afa19708cf06f60785deb400aee56bae1714124867d30c9a242dd993"
Feb 24 15:01:19 crc kubenswrapper[4982]: I0224 15:01:19.702658 4982 scope.go:117] "RemoveContainer" containerID="84fff1ab43822deb16f29349929d6cf2a137167a695be0832998867321f11f60"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.514947 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"]
Feb 24 15:01:32 crc kubenswrapper[4982]: E0224 15:01:32.515925 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d787834f-4c51-4859-baa1-ae0f6e91b1a6" containerName="oc"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.515946 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d787834f-4c51-4859-baa1-ae0f6e91b1a6" containerName="oc"
Feb 24 15:01:32 crc kubenswrapper[4982]: E0224 15:01:32.515965 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe339d91-5ca8-43ef-ab0a-552ebfc10fec" containerName="collect-profiles"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.515977 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe339d91-5ca8-43ef-ab0a-552ebfc10fec" containerName="collect-profiles"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.516179 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe339d91-5ca8-43ef-ab0a-552ebfc10fec" containerName="collect-profiles"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.516219 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d787834f-4c51-4859-baa1-ae0f6e91b1a6" containerName="oc"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.517897 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.520329 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.530860 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"]
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.587162 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.587222 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsqfv\" (UniqueName: \"kubernetes.io/projected/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-kube-api-access-vsqfv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.587253 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.688445 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.688679 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.688740 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsqfv\" (UniqueName: \"kubernetes.io/projected/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-kube-api-access-vsqfv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"
Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.689473 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"
\"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.689530 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.718926 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsqfv\" (UniqueName: \"kubernetes.io/projected/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-kube-api-access-vsqfv\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" Feb 24 15:01:32 crc kubenswrapper[4982]: I0224 15:01:32.838314 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" Feb 24 15:01:33 crc kubenswrapper[4982]: I0224 15:01:33.123116 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb"] Feb 24 15:01:33 crc kubenswrapper[4982]: I0224 15:01:33.793967 4982 generic.go:334] "Generic (PLEG): container finished" podID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerID="535185699d09a9726194f3bea5fd2992da7daabe4a041f4da1037d762cb1c2be" exitCode=0 Feb 24 15:01:33 crc kubenswrapper[4982]: I0224 15:01:33.794052 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" event={"ID":"ce4f631a-0a9c-4f06-9b04-1b4240f0900d","Type":"ContainerDied","Data":"535185699d09a9726194f3bea5fd2992da7daabe4a041f4da1037d762cb1c2be"} Feb 24 15:01:33 crc kubenswrapper[4982]: I0224 15:01:33.794349 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" event={"ID":"ce4f631a-0a9c-4f06-9b04-1b4240f0900d","Type":"ContainerStarted","Data":"328c8ea1f8a6b95dfb250c4af91d68fb50b41996ac86c8df0de5e31d305ac148"} Feb 24 15:01:36 crc kubenswrapper[4982]: I0224 15:01:36.817793 4982 generic.go:334] "Generic (PLEG): container finished" podID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerID="329fd8e8942280d2eeac151934c4acbfcb5b65e9b7afe04aa039182fd24c48b3" exitCode=0 Feb 24 15:01:36 crc kubenswrapper[4982]: I0224 15:01:36.817866 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" event={"ID":"ce4f631a-0a9c-4f06-9b04-1b4240f0900d","Type":"ContainerDied","Data":"329fd8e8942280d2eeac151934c4acbfcb5b65e9b7afe04aa039182fd24c48b3"} Feb 24 15:01:37 crc kubenswrapper[4982]: I0224 15:01:37.848889 4982 generic.go:334] "Generic (PLEG): container finished" podID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerID="d137065c5837a58d907f54570f1479e8faddb3fb08be010c0d0cffe72ea3ea8a" exitCode=0 Feb 24 15:01:37 crc kubenswrapper[4982]: I0224 
15:01:37.848984 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" event={"ID":"ce4f631a-0a9c-4f06-9b04-1b4240f0900d","Type":"ContainerDied","Data":"d137065c5837a58d907f54570f1479e8faddb3fb08be010c0d0cffe72ea3ea8a"} Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.162858 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.287863 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-bundle\") pod \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.287988 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-util\") pod \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.288073 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsqfv\" (UniqueName: \"kubernetes.io/projected/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-kube-api-access-vsqfv\") pod \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\" (UID: \"ce4f631a-0a9c-4f06-9b04-1b4240f0900d\") " Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.290905 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-bundle" (OuterVolumeSpecName: "bundle") pod "ce4f631a-0a9c-4f06-9b04-1b4240f0900d" (UID: "ce4f631a-0a9c-4f06-9b04-1b4240f0900d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.294008 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-kube-api-access-vsqfv" (OuterVolumeSpecName: "kube-api-access-vsqfv") pod "ce4f631a-0a9c-4f06-9b04-1b4240f0900d" (UID: "ce4f631a-0a9c-4f06-9b04-1b4240f0900d"). InnerVolumeSpecName "kube-api-access-vsqfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.299328 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-util" (OuterVolumeSpecName: "util") pod "ce4f631a-0a9c-4f06-9b04-1b4240f0900d" (UID: "ce4f631a-0a9c-4f06-9b04-1b4240f0900d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.390656 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-util\") on node \"crc\" DevicePath \"\"" Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.390702 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsqfv\" (UniqueName: \"kubernetes.io/projected/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-kube-api-access-vsqfv\") on node \"crc\" DevicePath \"\"" Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.390718 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-0a9c-4f06-9b04-1b4240f0900d-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.865362 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" event={"ID":"ce4f631a-0a9c-4f06-9b04-1b4240f0900d","Type":"ContainerDied","Data":"328c8ea1f8a6b95dfb250c4af91d68fb50b41996ac86c8df0de5e31d305ac148"} Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.865749 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="328c8ea1f8a6b95dfb250c4af91d68fb50b41996ac86c8df0de5e31d305ac148" Feb 24 15:01:39 crc kubenswrapper[4982]: I0224 15:01:39.865556 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.513690 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g"] Feb 24 15:01:49 crc kubenswrapper[4982]: E0224 15:01:49.514398 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerName="pull" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.514409 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerName="pull" Feb 24 15:01:49 crc kubenswrapper[4982]: E0224 15:01:49.514426 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerName="util" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.514432 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerName="util" Feb 24 15:01:49 crc kubenswrapper[4982]: E0224 15:01:49.514450 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerName="extract" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.514456 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerName="extract" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.514577 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4f631a-0a9c-4f06-9b04-1b4240f0900d" containerName="extract" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.514958 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.517545 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.517772 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.518646 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-bp5fr" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.527476 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g"] Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.549094 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrdn\" (UniqueName: \"kubernetes.io/projected/5d19a6f9-587e-42fc-8dd5-1a363bac4c09-kube-api-access-mgrdn\") pod \"obo-prometheus-operator-68bc856cb9-wjv8g\" (UID: \"5d19a6f9-587e-42fc-8dd5-1a363bac4c09\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.554665 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"] Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.555369 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.557423 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-kl7nz" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.557672 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.568153 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"] Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.571680 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"] Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.572338 4982 util.go:30] "No sandbox for pod can be found. 
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.599227 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"]
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.649931 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14ead058-d4ed-4e55-9632-a5e2f571b469-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb\" (UID: \"14ead058-d4ed-4e55-9632-a5e2f571b469\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.649978 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14ead058-d4ed-4e55-9632-a5e2f571b469-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb\" (UID: \"14ead058-d4ed-4e55-9632-a5e2f571b469\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.649998 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca2d2e72-fd73-4ad0-8d81-718235c7f891-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc\" (UID: \"ca2d2e72-fd73-4ad0-8d81-718235c7f891\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.650081 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca2d2e72-fd73-4ad0-8d81-718235c7f891-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc\" (UID: \"ca2d2e72-fd73-4ad0-8d81-718235c7f891\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.650257 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgrdn\" (UniqueName: \"kubernetes.io/projected/5d19a6f9-587e-42fc-8dd5-1a363bac4c09-kube-api-access-mgrdn\") pod \"obo-prometheus-operator-68bc856cb9-wjv8g\" (UID: \"5d19a6f9-587e-42fc-8dd5-1a363bac4c09\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.666353 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgrdn\" (UniqueName: \"kubernetes.io/projected/5d19a6f9-587e-42fc-8dd5-1a363bac4c09-kube-api-access-mgrdn\") pod \"obo-prometheus-operator-68bc856cb9-wjv8g\" (UID: \"5d19a6f9-587e-42fc-8dd5-1a363bac4c09\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.741577 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9bmsh"]
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.748130 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9bmsh"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.750523 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.751050 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-px6tv"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.752231 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca2d2e72-fd73-4ad0-8d81-718235c7f891-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc\" (UID: \"ca2d2e72-fd73-4ad0-8d81-718235c7f891\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.752320 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14ead058-d4ed-4e55-9632-a5e2f571b469-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb\" (UID: \"14ead058-d4ed-4e55-9632-a5e2f571b469\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.752350 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14ead058-d4ed-4e55-9632-a5e2f571b469-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb\" (UID: \"14ead058-d4ed-4e55-9632-a5e2f571b469\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.752371 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca2d2e72-fd73-4ad0-8d81-718235c7f891-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc\" (UID: \"ca2d2e72-fd73-4ad0-8d81-718235c7f891\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.754301 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9bmsh"]
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.760336 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14ead058-d4ed-4e55-9632-a5e2f571b469-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb\" (UID: \"14ead058-d4ed-4e55-9632-a5e2f571b469\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.762711 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca2d2e72-fd73-4ad0-8d81-718235c7f891-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc\" (UID: \"ca2d2e72-fd73-4ad0-8d81-718235c7f891\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.773137 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca2d2e72-fd73-4ad0-8d81-718235c7f891-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc\" (UID: \"ca2d2e72-fd73-4ad0-8d81-718235c7f891\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.780138 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14ead058-d4ed-4e55-9632-a5e2f571b469-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb\" (UID: \"14ead058-d4ed-4e55-9632-a5e2f571b469\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.831050 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.843766 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4hqxq"]
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.844585 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4hqxq"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.848568 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-5v8q2"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.853114 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ef281ee3-4742-4dd3-947f-32c7f039f5ec-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4hqxq\" (UID: \"ef281ee3-4742-4dd3-947f-32c7f039f5ec\") " pod="openshift-operators/perses-operator-5bf474d74f-4hqxq"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.853163 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8186e569-67ca-4273-9de2-130ffd7dcf09-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9bmsh\" (UID: \"8186e569-67ca-4273-9de2-130ffd7dcf09\") " pod="openshift-operators/observability-operator-59bdc8b94-9bmsh"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.853198 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdpqt\" (UniqueName: \"kubernetes.io/projected/ef281ee3-4742-4dd3-947f-32c7f039f5ec-kube-api-access-fdpqt\") pod \"perses-operator-5bf474d74f-4hqxq\" (UID: \"ef281ee3-4742-4dd3-947f-32c7f039f5ec\") " pod="openshift-operators/perses-operator-5bf474d74f-4hqxq"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.853221 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnf5t\" (UniqueName: \"kubernetes.io/projected/8186e569-67ca-4273-9de2-130ffd7dcf09-kube-api-access-gnf5t\") pod \"observability-operator-59bdc8b94-9bmsh\" (UID: \"8186e569-67ca-4273-9de2-130ffd7dcf09\") " pod="openshift-operators/observability-operator-59bdc8b94-9bmsh"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.858720 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4hqxq"]
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.891446 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.896036 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.954144 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdpqt\" (UniqueName: \"kubernetes.io/projected/ef281ee3-4742-4dd3-947f-32c7f039f5ec-kube-api-access-fdpqt\") pod \"perses-operator-5bf474d74f-4hqxq\" (UID: \"ef281ee3-4742-4dd3-947f-32c7f039f5ec\") " pod="openshift-operators/perses-operator-5bf474d74f-4hqxq"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.954195 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnf5t\" (UniqueName: \"kubernetes.io/projected/8186e569-67ca-4273-9de2-130ffd7dcf09-kube-api-access-gnf5t\") pod \"observability-operator-59bdc8b94-9bmsh\" (UID: \"8186e569-67ca-4273-9de2-130ffd7dcf09\") " pod="openshift-operators/observability-operator-59bdc8b94-9bmsh"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.954266 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ef281ee3-4742-4dd3-947f-32c7f039f5ec-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4hqxq\" (UID: \"ef281ee3-4742-4dd3-947f-32c7f039f5ec\") " pod="openshift-operators/perses-operator-5bf474d74f-4hqxq"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.954324 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8186e569-67ca-4273-9de2-130ffd7dcf09-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9bmsh\" (UID: \"8186e569-67ca-4273-9de2-130ffd7dcf09\") " pod="openshift-operators/observability-operator-59bdc8b94-9bmsh"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.961198 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8186e569-67ca-4273-9de2-130ffd7dcf09-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9bmsh\" (UID: \"8186e569-67ca-4273-9de2-130ffd7dcf09\") " pod="openshift-operators/observability-operator-59bdc8b94-9bmsh"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.971558 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ef281ee3-4742-4dd3-947f-32c7f039f5ec-openshift-service-ca\") pod \"perses-operator-5bf474d74f-4hqxq\" (UID: \"ef281ee3-4742-4dd3-947f-32c7f039f5ec\") " pod="openshift-operators/perses-operator-5bf474d74f-4hqxq"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.981698 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdpqt\" (UniqueName: \"kubernetes.io/projected/ef281ee3-4742-4dd3-947f-32c7f039f5ec-kube-api-access-fdpqt\") pod \"perses-operator-5bf474d74f-4hqxq\" (UID: \"ef281ee3-4742-4dd3-947f-32c7f039f5ec\") " pod="openshift-operators/perses-operator-5bf474d74f-4hqxq"
Feb 24 15:01:49 crc kubenswrapper[4982]: I0224 15:01:49.985597 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnf5t\" (UniqueName: \"kubernetes.io/projected/8186e569-67ca-4273-9de2-130ffd7dcf09-kube-api-access-gnf5t\") pod \"observability-operator-59bdc8b94-9bmsh\" (UID: \"8186e569-67ca-4273-9de2-130ffd7dcf09\") " pod="openshift-operators/observability-operator-59bdc8b94-9bmsh"
\"kubernetes.io/projected/8186e569-67ca-4273-9de2-130ffd7dcf09-kube-api-access-gnf5t\") pod \"observability-operator-59bdc8b94-9bmsh\" (UID: \"8186e569-67ca-4273-9de2-130ffd7dcf09\") " pod="openshift-operators/observability-operator-59bdc8b94-9bmsh" Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.107855 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9bmsh" Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.209221 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-4hqxq" Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.291155 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g"] Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.336621 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9bmsh"] Feb 24 15:01:50 crc kubenswrapper[4982]: W0224 15:01:50.347589 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8186e569_67ca_4273_9de2_130ffd7dcf09.slice/crio-7a8d1f058815959067c61e3da02a40216b96ed7d5bf65af2c6ca63de6ace12db WatchSource:0}: Error finding container 7a8d1f058815959067c61e3da02a40216b96ed7d5bf65af2c6ca63de6ace12db: Status 404 returned error can't find the container with id 7a8d1f058815959067c61e3da02a40216b96ed7d5bf65af2c6ca63de6ace12db Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.376104 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc"] Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.436756 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb"] Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.689875 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-4hqxq"] Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.935973 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc" event={"ID":"ca2d2e72-fd73-4ad0-8d81-718235c7f891","Type":"ContainerStarted","Data":"d407abaa06e7db73162fb46c4d1f86364232ba75328d69d2fd323472e5c32809"} Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.936941 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9bmsh" event={"ID":"8186e569-67ca-4273-9de2-130ffd7dcf09","Type":"ContainerStarted","Data":"7a8d1f058815959067c61e3da02a40216b96ed7d5bf65af2c6ca63de6ace12db"} Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.938000 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g" event={"ID":"5d19a6f9-587e-42fc-8dd5-1a363bac4c09","Type":"ContainerStarted","Data":"1fa4cbd0d7691ef097142c263e614ced6cd02fcac342a1dc477dcd5377000630"} Feb 24 15:01:50 crc kubenswrapper[4982]: I0224 15:01:50.939516 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb" event={"ID":"14ead058-d4ed-4e55-9632-a5e2f571b469","Type":"ContainerStarted","Data":"5c8d640f6439339d68af83a7fa821c807cbd97d4f9c771aaca9c0b7c13751144"} Feb 24 15:01:50 crc 
kubenswrapper[4982]: I0224 15:01:50.940591 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4hqxq" event={"ID":"ef281ee3-4742-4dd3-947f-32c7f039f5ec","Type":"ContainerStarted","Data":"3efc5ba3eea6ee826879a4cb7e313ef1003288d6d558689b5b07a56b1e00ab50"} Feb 24 15:01:53 crc kubenswrapper[4982]: I0224 15:01:53.968489 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-96fkj"] Feb 24 15:01:53 crc kubenswrapper[4982]: I0224 15:01:53.969091 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovn-controller" containerID="cri-o://0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641" gracePeriod=30 Feb 24 15:01:53 crc kubenswrapper[4982]: I0224 15:01:53.969145 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="nbdb" containerID="cri-o://4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003" gracePeriod=30 Feb 24 15:01:53 crc kubenswrapper[4982]: I0224 15:01:53.969193 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="northd" containerID="cri-o://3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831" gracePeriod=30 Feb 24 15:01:53 crc kubenswrapper[4982]: I0224 15:01:53.969237 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b" gracePeriod=30 Feb 24 15:01:53 crc kubenswrapper[4982]: I0224 15:01:53.969265 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kube-rbac-proxy-node" containerID="cri-o://2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa" gracePeriod=30 Feb 24 15:01:53 crc kubenswrapper[4982]: I0224 15:01:53.969292 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovn-acl-logging" containerID="cri-o://8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e" gracePeriod=30 Feb 24 15:01:53 crc kubenswrapper[4982]: I0224 15:01:53.969469 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="sbdb" containerID="cri-o://6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42" gracePeriod=30 Feb 24 15:01:54 crc kubenswrapper[4982]: I0224 15:01:54.048307 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" containerID="cri-o://6ca186df7bd9edf5224331d00a096936d7109a834bb502205b6c3fdf09f5c8ec" gracePeriod=30 Feb 24 15:01:54 crc kubenswrapper[4982]: I0224 15:01:54.988543 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/2.log" Feb 24 15:01:54 crc kubenswrapper[4982]: I0224 15:01:54.989607 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/1.log" Feb 24 15:01:54 crc kubenswrapper[4982]: I0224 15:01:54.989701 4982 generic.go:334] "Generic (PLEG): container finished" podID="86687a8a-6996-44fa-a62e-b43266c31922" containerID="6433821ead0065df5901d646c534d5091e0f54c83e36192db68997a515b90593" exitCode=2 Feb 24 15:01:54 crc kubenswrapper[4982]: I0224 15:01:54.989823 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgtdj" event={"ID":"86687a8a-6996-44fa-a62e-b43266c31922","Type":"ContainerDied","Data":"6433821ead0065df5901d646c534d5091e0f54c83e36192db68997a515b90593"} Feb 24 15:01:54 crc kubenswrapper[4982]: I0224 15:01:54.989902 4982 scope.go:117] "RemoveContainer" containerID="1818ec28666d9dfaa15e2c78a8fcdaabb66f2a34f05eda2ca38ef6d7b5e81cae" Feb 24 15:01:54 crc kubenswrapper[4982]: I0224 15:01:54.990569 4982 scope.go:117] "RemoveContainer" containerID="6433821ead0065df5901d646c534d5091e0f54c83e36192db68997a515b90593" Feb 24 15:01:54 crc kubenswrapper[4982]: E0224 15:01:54.990954 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jgtdj_openshift-multus(86687a8a-6996-44fa-a62e-b43266c31922)\"" pod="openshift-multus/multus-jgtdj" podUID="86687a8a-6996-44fa-a62e-b43266c31922" Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.004259 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/3.log" Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.007732 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovn-acl-logging/0.log" Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008278 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovn-controller/0.log" Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008738 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="6ca186df7bd9edf5224331d00a096936d7109a834bb502205b6c3fdf09f5c8ec" exitCode=0 Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008773 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42" exitCode=0 Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008783 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003" exitCode=0 Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008793 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831" exitCode=0 Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008802 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" 
containerID="8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e" exitCode=143 Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008811 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641" exitCode=143 Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008833 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"6ca186df7bd9edf5224331d00a096936d7109a834bb502205b6c3fdf09f5c8ec"} Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008862 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42"} Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008876 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003"} Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008888 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831"} Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008900 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e"} Feb 24 15:01:55 crc kubenswrapper[4982]: I0224 15:01:55.008913 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641"} Feb 24 15:01:56 crc kubenswrapper[4982]: I0224 15:01:56.020845 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovnkube-controller/3.log" Feb 24 15:01:56 crc kubenswrapper[4982]: I0224 15:01:56.024587 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovn-acl-logging/0.log" Feb 24 15:01:56 crc kubenswrapper[4982]: I0224 15:01:56.025403 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovn-controller/0.log" Feb 24 15:01:56 crc kubenswrapper[4982]: I0224 15:01:56.025932 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b" exitCode=0 Feb 24 15:01:56 crc kubenswrapper[4982]: I0224 15:01:56.025977 4982 generic.go:334] "Generic (PLEG): container finished" podID="91cccac8-913c-4bcf-a654-298dfce0a471" containerID="2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa" exitCode=0 Feb 24 15:01:56 crc kubenswrapper[4982]: I0224 15:01:56.026010 4982 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b"} Feb 24 15:01:56 crc kubenswrapper[4982]: I0224 15:01:56.026051 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa"} Feb 24 15:01:58 crc kubenswrapper[4982]: E0224 15:01:58.484592 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42 is running failed: container process not found" containerID="6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 24 15:01:58 crc kubenswrapper[4982]: E0224 15:01:58.484617 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003 is running failed: container process not found" containerID="4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 24 15:01:58 crc kubenswrapper[4982]: E0224 15:01:58.485238 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42 is running failed: container process not found" containerID="6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 24 15:01:58 crc kubenswrapper[4982]: E0224 15:01:58.485297 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003 is running failed: container process not found" containerID="4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 24 15:01:58 crc kubenswrapper[4982]: E0224 15:01:58.485471 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003 is running failed: container process not found" containerID="4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 24 15:01:58 crc kubenswrapper[4982]: E0224 15:01:58.485492 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="nbdb" Feb 24 15:01:58 crc kubenswrapper[4982]: E0224 15:01:58.485581 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42 is running failed: container process not found" containerID="6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 24 15:01:58 crc kubenswrapper[4982]: E0224 15:01:58.485596 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="sbdb" Feb 24 15:02:00 crc kubenswrapper[4982]: I0224 15:02:00.124440 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532422-znpkl"] Feb 24 15:02:00 crc kubenswrapper[4982]: I0224 15:02:00.125448 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:00 crc kubenswrapper[4982]: I0224 15:02:00.127354 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:02:00 crc kubenswrapper[4982]: I0224 15:02:00.128484 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:02:00 crc kubenswrapper[4982]: I0224 15:02:00.130410 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:02:00 crc kubenswrapper[4982]: I0224 15:02:00.146943 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7p5\" (UniqueName: \"kubernetes.io/projected/f18680cf-a054-40f5-af8d-54eec7b94616-kube-api-access-xp7p5\") pod \"auto-csr-approver-29532422-znpkl\" (UID: \"f18680cf-a054-40f5-af8d-54eec7b94616\") " pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:00 crc kubenswrapper[4982]: I0224 15:02:00.247918 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7p5\" (UniqueName: \"kubernetes.io/projected/f18680cf-a054-40f5-af8d-54eec7b94616-kube-api-access-xp7p5\") pod \"auto-csr-approver-29532422-znpkl\" (UID: \"f18680cf-a054-40f5-af8d-54eec7b94616\") " pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:00 crc kubenswrapper[4982]: I0224 15:02:00.283165 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7p5\" (UniqueName: \"kubernetes.io/projected/f18680cf-a054-40f5-af8d-54eec7b94616-kube-api-access-xp7p5\") pod \"auto-csr-approver-29532422-znpkl\" (UID: 
\"f18680cf-a054-40f5-af8d-54eec7b94616\") " pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:00 crc kubenswrapper[4982]: I0224 15:02:00.437582 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.274934 4982 scope.go:117] "RemoveContainer" containerID="73edaa71fd5db79b2da345e549e14dc63809ed37401de55fd237bdabc1d0d9d3" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.308026 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovn-acl-logging/0.log" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.308491 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovn-controller/0.log" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.309104 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.360066 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29532422-znpkl_openshift-infra_f18680cf-a054-40f5-af8d-54eec7b94616_0(1bf12fcc99dfb8a05477ed232aae990240112e7a155884517918ddf32dc8dcf3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.360133 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29532422-znpkl_openshift-infra_f18680cf-a054-40f5-af8d-54eec7b94616_0(1bf12fcc99dfb8a05477ed232aae990240112e7a155884517918ddf32dc8dcf3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.360155 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29532422-znpkl_openshift-infra_f18680cf-a054-40f5-af8d-54eec7b94616_0(1bf12fcc99dfb8a05477ed232aae990240112e7a155884517918ddf32dc8dcf3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.360192 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29532422-znpkl_openshift-infra(f18680cf-a054-40f5-af8d-54eec7b94616)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29532422-znpkl_openshift-infra(f18680cf-a054-40f5-af8d-54eec7b94616)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29532422-znpkl_openshift-infra_f18680cf-a054-40f5-af8d-54eec7b94616_0(1bf12fcc99dfb8a05477ed232aae990240112e7a155884517918ddf32dc8dcf3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29532422-znpkl" podUID="f18680cf-a054-40f5-af8d-54eec7b94616" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.396406 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dbnnq"] Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397215 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397233 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397264 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovn-acl-logging" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397272 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovn-acl-logging" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397283 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kubecfg-setup" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397291 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kubecfg-setup" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397314 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovn-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397322 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovn-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397331 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kube-rbac-proxy-node" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397339 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kube-rbac-proxy-node" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397349 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="sbdb" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397357 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="sbdb" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397368 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="nbdb" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397375 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="nbdb" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397387 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="northd" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397394 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="northd" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397403 4982 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397409 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397418 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397424 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397434 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397439 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397573 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="sbdb" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397585 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="northd" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397595 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397602 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397610 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397618 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="nbdb" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397627 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovn-acl-logging" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397635 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="kube-rbac-proxy-node" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397642 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397649 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovn-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: E0224 15:02:02.397825 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397834 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 
crc kubenswrapper[4982]: E0224 15:02:02.397847 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.397876 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.398028 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.398039 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" containerName="ovnkube-controller" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.405474 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.476716 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91cccac8-913c-4bcf-a654-298dfce0a471-ovn-node-metrics-cert\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.476957 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-ovn\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477047 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-var-lib-openvswitch\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477129 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-log-socket\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477276 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-openvswitch\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477375 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-netns\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477454 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-systemd\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477553 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-script-lib\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477638 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-systemd-units\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477738 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-kubelet\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477826 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-bin\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477910 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-config\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477992 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-var-lib-cni-networks-ovn-kubernetes\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.478087 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-etc-openvswitch\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477567 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477572 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477593 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-log-socket" (OuterVolumeSpecName: "log-socket") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477613 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477632 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.477960 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.478701 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.478845 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.478874 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.478897 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.479413 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.479443 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.480607 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.478211 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-env-overrides\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.480979 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-slash\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.481057 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-netd\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.481141 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-ovn-kubernetes\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.481240 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjhfr\" (UniqueName: \"kubernetes.io/projected/91cccac8-913c-4bcf-a654-298dfce0a471-kube-api-access-tjhfr\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: \"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.481333 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-node-log\") pod \"91cccac8-913c-4bcf-a654-298dfce0a471\" (UID: 
\"91cccac8-913c-4bcf-a654-298dfce0a471\") " Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.481285 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.481299 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-slash" (OuterVolumeSpecName: "host-slash") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.481325 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.481395 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-node-log" (OuterVolumeSpecName: "node-log") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.481985 4982 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-slash\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482060 4982 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482123 4982 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482187 4982 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-node-log\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482254 4982 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482337 4982 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-log-socket\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482437 4982 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482622 4982 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482702 4982 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482767 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482826 4982 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.482841 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91cccac8-913c-4bcf-a654-298dfce0a471-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.483436 4982 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.483458 4982 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.483467 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.483477 4982 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.483487 4982 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.483509 4982 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91cccac8-913c-4bcf-a654-298dfce0a471-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.484997 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91cccac8-913c-4bcf-a654-298dfce0a471-kube-api-access-tjhfr" (OuterVolumeSpecName: 
"kube-api-access-tjhfr") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "kube-api-access-tjhfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585074 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-kubelet\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585145 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-etc-openvswitch\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585264 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-slash\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585332 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-log-socket\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585356 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c7c3e92-d40a-4c79-ae08-921c3ee07519-ovn-node-metrics-cert\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585402 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c7c3e92-d40a-4c79-ae08-921c3ee07519-env-overrides\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585461 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-node-log\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585525 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-run-openvswitch\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585568 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-run-systemd\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585635 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-run-ovn-kubernetes\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585680 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-var-lib-openvswitch\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585704 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-cni-netd\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585748 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-cni-bin\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585771 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c7c3e92-d40a-4c79-ae08-921c3ee07519-ovnkube-config\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585799 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z7wm\" (UniqueName: \"kubernetes.io/projected/2c7c3e92-d40a-4c79-ae08-921c3ee07519-kube-api-access-2z7wm\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585873 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585939 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-run-netns\") pod \"ovnkube-node-dbnnq\" (UID: 
\"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585959 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-run-ovn\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.585980 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c7c3e92-d40a-4c79-ae08-921c3ee07519-ovnkube-script-lib\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.586001 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-systemd-units\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.586120 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjhfr\" (UniqueName: \"kubernetes.io/projected/91cccac8-913c-4bcf-a654-298dfce0a471-kube-api-access-tjhfr\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.586141 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91cccac8-913c-4bcf-a654-298dfce0a471-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.591097 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "91cccac8-913c-4bcf-a654-298dfce0a471" (UID: "91cccac8-913c-4bcf-a654-298dfce0a471"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.687879 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c7c3e92-d40a-4c79-ae08-921c3ee07519-ovn-node-metrics-cert\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.687941 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c7c3e92-d40a-4c79-ae08-921c3ee07519-env-overrides\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.687980 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-node-log\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688003 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-run-openvswitch\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688036 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-run-systemd\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688057 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-run-ovn-kubernetes\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688081 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-var-lib-openvswitch\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688103 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-cni-netd\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688124 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-cni-bin\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc 
kubenswrapper[4982]: I0224 15:02:02.688144 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c7c3e92-d40a-4c79-ae08-921c3ee07519-ovnkube-config\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688173 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z7wm\" (UniqueName: \"kubernetes.io/projected/2c7c3e92-d40a-4c79-ae08-921c3ee07519-kube-api-access-2z7wm\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688206 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688231 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-run-ovn\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688250 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-run-netns\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688271 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c7c3e92-d40a-4c79-ae08-921c3ee07519-ovnkube-script-lib\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688288 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-systemd-units\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688314 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-kubelet\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688334 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-etc-openvswitch\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688356 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-slash\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-log-socket\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688438 4982 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91cccac8-913c-4bcf-a654-298dfce0a471-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.688483 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-log-socket\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.689078 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c7c3e92-d40a-4c79-ae08-921c3ee07519-env-overrides\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.689129 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-node-log\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.689157 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-run-openvswitch\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.689182 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-run-systemd\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.689209 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-run-ovn-kubernetes\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.689237 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-var-lib-openvswitch\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.689262 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-cni-netd\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.689481 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-cni-bin\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.689949 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c7c3e92-d40a-4c79-ae08-921c3ee07519-ovnkube-config\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.690273 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-systemd-units\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.690311 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-kubelet\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.690339 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-etc-openvswitch\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.690367 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-slash\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.690396 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-run-ovn\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.690422 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.690452 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-run-netns\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c7c3e92-d40a-4c79-ae08-921c3ee07519-host-run-netns\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.691014 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c7c3e92-d40a-4c79-ae08-921c3ee07519-ovnkube-script-lib\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.694606 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c7c3e92-d40a-4c79-ae08-921c3ee07519-ovn-node-metrics-cert\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.715884 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z7wm\" (UniqueName: \"kubernetes.io/projected/2c7c3e92-d40a-4c79-ae08-921c3ee07519-kube-api-access-2z7wm\") pod \"ovnkube-node-dbnnq\" (UID: \"2c7c3e92-d40a-4c79-ae08-921c3ee07519\") " pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:02 crc kubenswrapper[4982]: I0224 15:02:02.740883 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.079978 4982 generic.go:334] "Generic (PLEG): container finished" podID="2c7c3e92-d40a-4c79-ae08-921c3ee07519" containerID="9c1057564538ca2043945f55fe38ab2d6a57272ea4a8917c17b032d96d491ddd" exitCode=0 Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.080063 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerDied","Data":"9c1057564538ca2043945f55fe38ab2d6a57272ea4a8917c17b032d96d491ddd"} Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.080615 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerStarted","Data":"087b99591be72bc091560b8481723507dfe42364370ca181ae29535c1807daf0"} Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.084948 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc" event={"ID":"ca2d2e72-fd73-4ad0-8d81-718235c7f891","Type":"ContainerStarted","Data":"33bbed730ead7234d259e2f99d98e4797c4881b0c0f1a34fb8d94f1d67e4d4d7"} Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.086607 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9bmsh" event={"ID":"8186e569-67ca-4273-9de2-130ffd7dcf09","Type":"ContainerStarted","Data":"5b62b020480db14bd7bc76c313abb0c98951a5538729abc5047b8ffb1fa7e1ac"} Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.087186 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9bmsh" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.094599 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovn-acl-logging/0.log" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.095202 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-96fkj_91cccac8-913c-4bcf-a654-298dfce0a471/ovn-controller/0.log" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.095641 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" event={"ID":"91cccac8-913c-4bcf-a654-298dfce0a471","Type":"ContainerDied","Data":"68f0edfaec0a349fa2bca761d212b8897185bcd1ed3f8156ce092422d8f7c3b0"} Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.095693 4982 scope.go:117] "RemoveContainer" containerID="6ca186df7bd9edf5224331d00a096936d7109a834bb502205b6c3fdf09f5c8ec" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.095876 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-96fkj" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.107344 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g" event={"ID":"5d19a6f9-587e-42fc-8dd5-1a363bac4c09","Type":"ContainerStarted","Data":"818aabb41007e821afcec5a2f97d9a9e2c29243792285a3d5530096ce96bf124"} Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.109894 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/2.log" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.111049 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb" event={"ID":"14ead058-d4ed-4e55-9632-a5e2f571b469","Type":"ContainerStarted","Data":"902d1cbaaa2d294c872cc6ea2d4063bd8bd7a475adbc381bb5ced3340b984147"} Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.112444 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-4hqxq" event={"ID":"ef281ee3-4742-4dd3-947f-32c7f039f5ec","Type":"ContainerStarted","Data":"f275dd5664f5fa07bb9c05b68177e17bbee0cf3192e1ebe6805df8f8ded6ecc6"} Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.113100 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-4hqxq" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.132914 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9bmsh" podStartSLOduration=2.139543308 podStartE2EDuration="14.132889242s" podCreationTimestamp="2026-02-24 15:01:49 +0000 UTC" firstStartedPulling="2026-02-24 15:01:50.350643848 +0000 UTC m=+771.969702341" lastFinishedPulling="2026-02-24 15:02:02.343989782 +0000 UTC m=+783.963048275" observedRunningTime="2026-02-24 15:02:03.125726109 +0000 UTC m=+784.744784612" watchObservedRunningTime="2026-02-24 15:02:03.132889242 +0000 UTC m=+784.751947735" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.141383 4982 scope.go:117] "RemoveContainer" containerID="6e06b29f7c67ea3e357a49c814a560af2dd4bf80243eccbd48b00363076abc42" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.156030 4982 scope.go:117] "RemoveContainer" containerID="4db08349dbcdcc3f1be1473021ddd41d594be972dbab6e055ef921e3ce1b8003" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.161790 4982 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9bmsh" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.183750 4982 scope.go:117] "RemoveContainer" containerID="3b78625da3d98f41a978ae686c07b5a3b4c8f6d91c54c350a79107d4ff21a831" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.201881 4982 scope.go:117] "RemoveContainer" containerID="ceb920887fbfaa9bfdf91a65e8f9941fa071443d2db2bfcd3807a6330002aa8b" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.226483 4982 scope.go:117] "RemoveContainer" containerID="2dc212151e0b19eb5ac6d453ba8757fe625c0c874c276478a4568fd6f624cdaa" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.227863 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc" podStartSLOduration=2.267651859 podStartE2EDuration="14.22784275s" podCreationTimestamp="2026-02-24 15:01:49 +0000 UTC" firstStartedPulling="2026-02-24 15:01:50.388709783 +0000 UTC m=+772.007768276" lastFinishedPulling="2026-02-24 15:02:02.348900674 +0000 UTC m=+783.967959167" observedRunningTime="2026-02-24 15:02:03.220387019 +0000 UTC m=+784.839445512" watchObservedRunningTime="2026-02-24 15:02:03.22784275 +0000 UTC m=+784.846901243" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.274263 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjv8g" podStartSLOduration=2.285855318 podStartE2EDuration="14.274246659s" podCreationTimestamp="2026-02-24 15:01:49 +0000 UTC" firstStartedPulling="2026-02-24 15:01:50.30507515 +0000 UTC m=+771.924133643" lastFinishedPulling="2026-02-24 15:02:02.293466481 +0000 UTC m=+783.912524984" observedRunningTime="2026-02-24 15:02:03.273464058 +0000 UTC m=+784.892522551" watchObservedRunningTime="2026-02-24 15:02:03.274246659 +0000 UTC m=+784.893305142" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.303921 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-4hqxq" podStartSLOduration=2.706251843 podStartE2EDuration="14.303900198s" podCreationTimestamp="2026-02-24 15:01:49 +0000 UTC" firstStartedPulling="2026-02-24 15:01:50.695396174 +0000 UTC m=+772.314454667" lastFinishedPulling="2026-02-24 15:02:02.293044529 +0000 UTC m=+783.912103022" observedRunningTime="2026-02-24 15:02:03.296953951 +0000 UTC m=+784.916012444" watchObservedRunningTime="2026-02-24 15:02:03.303900198 +0000 UTC m=+784.922958691" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.317418 4982 scope.go:117] "RemoveContainer" containerID="8831d50c1c62ec471a0b903d69a3bc8e1198a64a9944e5b616b47910d5e9568e" Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.340101 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-96fkj"] Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.350700 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-96fkj"] Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.397414 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb" podStartSLOduration=2.554235427 podStartE2EDuration="14.397394597s" podCreationTimestamp="2026-02-24 15:01:49 +0000 UTC" firstStartedPulling="2026-02-24 15:01:50.450324822 +0000 UTC m=+772.069383315" 
Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.425985 4982 scope.go:117] "RemoveContainer" containerID="0703a834fab99307aa5248a23aeb232920a12ab2faf150accbb294d56a320641"
Feb 24 15:02:03 crc kubenswrapper[4982]: I0224 15:02:03.470723 4982 scope.go:117] "RemoveContainer" containerID="c91b55c6c518beb8c07d3924d4251f311bbc1824c1a601124925f632cfc6e5a0"
Feb 24 15:02:04 crc kubenswrapper[4982]: I0224 15:02:04.123120 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerStarted","Data":"e6cc47ddf94bf30fa28aef12844eb316528bd88e229dec6f09914fe55634c5b1"}
Feb 24 15:02:04 crc kubenswrapper[4982]: I0224 15:02:04.123401 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerStarted","Data":"6d33dcf560490611b0d1d75c857399f23feddbeb1a822819e916f291df42686e"}
Feb 24 15:02:04 crc kubenswrapper[4982]: I0224 15:02:04.124617 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerStarted","Data":"d93d59dfccf27ca84fc43b6308c9b3efe520c2c362052824118d3b6e0b0a80ca"}
Feb 24 15:02:04 crc kubenswrapper[4982]: I0224 15:02:04.124627 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerStarted","Data":"4dfbb3e2e087d40b038d15e7988dbab2c1961e027edae4882d1644b40a224aea"}
Feb 24 15:02:04 crc kubenswrapper[4982]: I0224 15:02:04.124636 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerStarted","Data":"6d0c0e45440e99b48d3a4b7e5ca4a209259ef144571631f77b717efdc9e1e1ae"}
Feb 24 15:02:04 crc kubenswrapper[4982]: I0224 15:02:04.124645 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerStarted","Data":"bd36ac87a0e39bf5ea8d7679dc450ac414f58b0bf80eb3d49deb35b84dcea442"}
Feb 24 15:02:05 crc kubenswrapper[4982]: I0224 15:02:05.154905 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91cccac8-913c-4bcf-a654-298dfce0a471" path="/var/lib/kubelet/pods/91cccac8-913c-4bcf-a654-298dfce0a471/volumes"
Feb 24 15:02:07 crc kubenswrapper[4982]: I0224 15:02:07.146635 4982 scope.go:117] "RemoveContainer" containerID="6433821ead0065df5901d646c534d5091e0f54c83e36192db68997a515b90593"
Feb 24 15:02:07 crc kubenswrapper[4982]: E0224 15:02:07.147032 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-jgtdj_openshift-multus(86687a8a-6996-44fa-a62e-b43266c31922)\"" pod="openshift-multus/multus-jgtdj" podUID="86687a8a-6996-44fa-a62e-b43266c31922"
Feb 24 15:02:07 crc kubenswrapper[4982]: I0224 15:02:07.157727 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerStarted","Data":"44bd2d249e4aa010dbb5822954a2ef8cba05074fafc1bd6287efccf22a86c065"}
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.176222 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" event={"ID":"2c7c3e92-d40a-4c79-ae08-921c3ee07519","Type":"ContainerStarted","Data":"044bc5e8795caefd785c5e11c6a059f06aa1993fa1859f233bc9ac7f0cca0c6e"}
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.176673 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.176687 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.176696 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.191616 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532422-znpkl"]
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.191751 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532422-znpkl"
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.192206 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532422-znpkl"
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.213807 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" podStartSLOduration=7.213791548 podStartE2EDuration="7.213791548s" podCreationTimestamp="2026-02-24 15:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:02:09.210128309 +0000 UTC m=+790.829186802" watchObservedRunningTime="2026-02-24 15:02:09.213791548 +0000 UTC m=+790.832850041"
Feb 24 15:02:09 crc kubenswrapper[4982]: E0224 15:02:09.250404 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29532422-znpkl_openshift-infra_f18680cf-a054-40f5-af8d-54eec7b94616_0(df82714cb3f4c272e44d27a1e2cd580a9cf13fa5c38dd35b429c3903cd3e802a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 15:02:09 crc kubenswrapper[4982]: E0224 15:02:09.250513 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29532422-znpkl_openshift-infra_f18680cf-a054-40f5-af8d-54eec7b94616_0(df82714cb3f4c272e44d27a1e2cd580a9cf13fa5c38dd35b429c3903cd3e802a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29532422-znpkl"
Feb 24 15:02:09 crc kubenswrapper[4982]: E0224 15:02:09.250539 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29532422-znpkl_openshift-infra_f18680cf-a054-40f5-af8d-54eec7b94616_0(df82714cb3f4c272e44d27a1e2cd580a9cf13fa5c38dd35b429c3903cd3e802a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29532422-znpkl"
Feb 24 15:02:09 crc kubenswrapper[4982]: E0224 15:02:09.250592 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29532422-znpkl_openshift-infra(f18680cf-a054-40f5-af8d-54eec7b94616)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29532422-znpkl_openshift-infra(f18680cf-a054-40f5-af8d-54eec7b94616)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29532422-znpkl_openshift-infra_f18680cf-a054-40f5-af8d-54eec7b94616_0(df82714cb3f4c272e44d27a1e2cd580a9cf13fa5c38dd35b429c3903cd3e802a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29532422-znpkl" podUID="f18680cf-a054-40f5-af8d-54eec7b94616"
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.251543 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:09 crc kubenswrapper[4982]: I0224 15:02:09.278626 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq"
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.212613 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-4hqxq"
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.924840 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"]
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.925570 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.929512 4982 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-g74dk"
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.930673 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.933157 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qw7f\" (UniqueName: \"kubernetes.io/projected/906e9aec-8f03-4230-8b2b-01459a8c2fcc-kube-api-access-5qw7f\") pod \"cert-manager-cainjector-cf98fcc89-mpsc5\" (UID: \"906e9aec-8f03-4230-8b2b-01459a8c2fcc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.939798 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-ll6lf"]
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.940641 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.942448 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.943750 4982 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rrd9v"
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.951720 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"]
Feb 24 15:02:10 crc kubenswrapper[4982]: I0224 15:02:10.956987 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ll6lf"]
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.003368 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4zfz6"]
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.004753 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.025011 4982 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kwj4h"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.025777 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4zfz6"]
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.034534 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qw7f\" (UniqueName: \"kubernetes.io/projected/906e9aec-8f03-4230-8b2b-01459a8c2fcc-kube-api-access-5qw7f\") pod \"cert-manager-cainjector-cf98fcc89-mpsc5\" (UID: \"906e9aec-8f03-4230-8b2b-01459a8c2fcc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.081571 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qw7f\" (UniqueName: \"kubernetes.io/projected/906e9aec-8f03-4230-8b2b-01459a8c2fcc-kube-api-access-5qw7f\") pod \"cert-manager-cainjector-cf98fcc89-mpsc5\" (UID: \"906e9aec-8f03-4230-8b2b-01459a8c2fcc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.135751 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr2wq\" (UniqueName: \"kubernetes.io/projected/8422a1e6-92b5-4b34-a360-004609a25ac0-kube-api-access-nr2wq\") pod \"cert-manager-webhook-687f57d79b-4zfz6\" (UID: \"8422a1e6-92b5-4b34-a360-004609a25ac0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.135804 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55gpb\" (UniqueName: \"kubernetes.io/projected/d543e947-1fd6-4253-84c8-5dd81a835ba4-kube-api-access-55gpb\") pod \"cert-manager-858654f9db-ll6lf\" (UID: \"d543e947-1fd6-4253-84c8-5dd81a835ba4\") " pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.238281 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr2wq\" (UniqueName: \"kubernetes.io/projected/8422a1e6-92b5-4b34-a360-004609a25ac0-kube-api-access-nr2wq\") pod \"cert-manager-webhook-687f57d79b-4zfz6\" (UID: \"8422a1e6-92b5-4b34-a360-004609a25ac0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.238580 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55gpb\" (UniqueName: \"kubernetes.io/projected/d543e947-1fd6-4253-84c8-5dd81a835ba4-kube-api-access-55gpb\") pod \"cert-manager-858654f9db-ll6lf\" (UID: \"d543e947-1fd6-4253-84c8-5dd81a835ba4\") " pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.239007 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.259470 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55gpb\" (UniqueName: \"kubernetes.io/projected/d543e947-1fd6-4253-84c8-5dd81a835ba4-kube-api-access-55gpb\") pod \"cert-manager-858654f9db-ll6lf\" (UID: \"d543e947-1fd6-4253-84c8-5dd81a835ba4\") " pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.259616 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr2wq\" (UniqueName: \"kubernetes.io/projected/8422a1e6-92b5-4b34-a360-004609a25ac0-kube-api-access-nr2wq\") pod \"cert-manager-webhook-687f57d79b-4zfz6\" (UID: \"8422a1e6-92b5-4b34-a360-004609a25ac0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.279066 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager_906e9aec-8f03-4230-8b2b-01459a8c2fcc_0(f34c1caac6ee37fc358ea2f6e723b081ccafe475ba6c76694e68389cfc2cbe0c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.279129 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager_906e9aec-8f03-4230-8b2b-01459a8c2fcc_0(f34c1caac6ee37fc358ea2f6e723b081ccafe475ba6c76694e68389cfc2cbe0c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.279149 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager_906e9aec-8f03-4230-8b2b-01459a8c2fcc_0(f34c1caac6ee37fc358ea2f6e723b081ccafe475ba6c76694e68389cfc2cbe0c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.279202 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager(906e9aec-8f03-4230-8b2b-01459a8c2fcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager(906e9aec-8f03-4230-8b2b-01459a8c2fcc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager_906e9aec-8f03-4230-8b2b-01459a8c2fcc_0(f34c1caac6ee37fc358ea2f6e723b081ccafe475ba6c76694e68389cfc2cbe0c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5" podUID="906e9aec-8f03-4230-8b2b-01459a8c2fcc"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.329078 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.351526 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-4zfz6_cert-manager_8422a1e6-92b5-4b34-a360-004609a25ac0_0(61540257a7f988f4e615ac668c37c17a5ab2e27229b3b763f1833ae04491153a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.351610 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-4zfz6_cert-manager_8422a1e6-92b5-4b34-a360-004609a25ac0_0(61540257a7f988f4e615ac668c37c17a5ab2e27229b3b763f1833ae04491153a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.351632 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-4zfz6_cert-manager_8422a1e6-92b5-4b34-a360-004609a25ac0_0(61540257a7f988f4e615ac668c37c17a5ab2e27229b3b763f1833ae04491153a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.351673 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-4zfz6_cert-manager(8422a1e6-92b5-4b34-a360-004609a25ac0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-4zfz6_cert-manager(8422a1e6-92b5-4b34-a360-004609a25ac0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-4zfz6_cert-manager_8422a1e6-92b5-4b34-a360-004609a25ac0_0(61540257a7f988f4e615ac668c37c17a5ab2e27229b3b763f1833ae04491153a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" podUID="8422a1e6-92b5-4b34-a360-004609a25ac0"
Feb 24 15:02:11 crc kubenswrapper[4982]: I0224 15:02:11.552310 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.577868 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-ll6lf_cert-manager_d543e947-1fd6-4253-84c8-5dd81a835ba4_0(640bab1a7edd958f96d92a950070914d99235955c2b53af32d7d9a2f19965b06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.577938 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-ll6lf_cert-manager_d543e947-1fd6-4253-84c8-5dd81a835ba4_0(640bab1a7edd958f96d92a950070914d99235955c2b53af32d7d9a2f19965b06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.577963 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-ll6lf_cert-manager_d543e947-1fd6-4253-84c8-5dd81a835ba4_0(640bab1a7edd958f96d92a950070914d99235955c2b53af32d7d9a2f19965b06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:11 crc kubenswrapper[4982]: E0224 15:02:11.578018 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-ll6lf_cert-manager(d543e947-1fd6-4253-84c8-5dd81a835ba4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-ll6lf_cert-manager(d543e947-1fd6-4253-84c8-5dd81a835ba4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-ll6lf_cert-manager_d543e947-1fd6-4253-84c8-5dd81a835ba4_0(640bab1a7edd958f96d92a950070914d99235955c2b53af32d7d9a2f19965b06): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-ll6lf" podUID="d543e947-1fd6-4253-84c8-5dd81a835ba4"
Feb 24 15:02:12 crc kubenswrapper[4982]: I0224 15:02:12.191704 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:12 crc kubenswrapper[4982]: I0224 15:02:12.191734 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:12 crc kubenswrapper[4982]: I0224 15:02:12.191719 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:12 crc kubenswrapper[4982]: I0224 15:02:12.192310 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:12 crc kubenswrapper[4982]: I0224 15:02:12.192336 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:12 crc kubenswrapper[4982]: I0224 15:02:12.192603 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.241714 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-ll6lf_cert-manager_d543e947-1fd6-4253-84c8-5dd81a835ba4_0(aee68d4263d221d0645878676b454b965d91769452eaf65a0a928398f5295bfd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.241781 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-ll6lf_cert-manager_d543e947-1fd6-4253-84c8-5dd81a835ba4_0(aee68d4263d221d0645878676b454b965d91769452eaf65a0a928398f5295bfd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.241802 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-ll6lf_cert-manager_d543e947-1fd6-4253-84c8-5dd81a835ba4_0(aee68d4263d221d0645878676b454b965d91769452eaf65a0a928398f5295bfd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-ll6lf"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.241846 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-ll6lf_cert-manager(d543e947-1fd6-4253-84c8-5dd81a835ba4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-ll6lf_cert-manager(d543e947-1fd6-4253-84c8-5dd81a835ba4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-ll6lf_cert-manager_d543e947-1fd6-4253-84c8-5dd81a835ba4_0(aee68d4263d221d0645878676b454b965d91769452eaf65a0a928398f5295bfd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-ll6lf" podUID="d543e947-1fd6-4253-84c8-5dd81a835ba4"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.255199 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager_906e9aec-8f03-4230-8b2b-01459a8c2fcc_0(a5d34aa62fc8700160145276a74f723ed63b9cc102fc1db94417498e618e3c63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.255253 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager_906e9aec-8f03-4230-8b2b-01459a8c2fcc_0(a5d34aa62fc8700160145276a74f723ed63b9cc102fc1db94417498e618e3c63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.255274 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager_906e9aec-8f03-4230-8b2b-01459a8c2fcc_0(a5d34aa62fc8700160145276a74f723ed63b9cc102fc1db94417498e618e3c63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.255312 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager(906e9aec-8f03-4230-8b2b-01459a8c2fcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager(906e9aec-8f03-4230-8b2b-01459a8c2fcc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-mpsc5_cert-manager_906e9aec-8f03-4230-8b2b-01459a8c2fcc_0(a5d34aa62fc8700160145276a74f723ed63b9cc102fc1db94417498e618e3c63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5" podUID="906e9aec-8f03-4230-8b2b-01459a8c2fcc"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.257667 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-4zfz6_cert-manager_8422a1e6-92b5-4b34-a360-004609a25ac0_0(d547f1f950d9440a000421112cfef4fef3cc1b719532f4a07ca28a44bb2c9643): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.257719 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-4zfz6_cert-manager_8422a1e6-92b5-4b34-a360-004609a25ac0_0(d547f1f950d9440a000421112cfef4fef3cc1b719532f4a07ca28a44bb2c9643): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.257738 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-4zfz6_cert-manager_8422a1e6-92b5-4b34-a360-004609a25ac0_0(d547f1f950d9440a000421112cfef4fef3cc1b719532f4a07ca28a44bb2c9643): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6"
pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" Feb 24 15:02:12 crc kubenswrapper[4982]: E0224 15:02:12.257774 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-4zfz6_cert-manager(8422a1e6-92b5-4b34-a360-004609a25ac0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-4zfz6_cert-manager(8422a1e6-92b5-4b34-a360-004609a25ac0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-4zfz6_cert-manager_8422a1e6-92b5-4b34-a360-004609a25ac0_0(d547f1f950d9440a000421112cfef4fef3cc1b719532f4a07ca28a44bb2c9643): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" podUID="8422a1e6-92b5-4b34-a360-004609a25ac0" Feb 24 15:02:21 crc kubenswrapper[4982]: I0224 15:02:21.145723 4982 scope.go:117] "RemoveContainer" containerID="6433821ead0065df5901d646c534d5091e0f54c83e36192db68997a515b90593" Feb 24 15:02:22 crc kubenswrapper[4982]: I0224 15:02:22.262001 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jgtdj_86687a8a-6996-44fa-a62e-b43266c31922/kube-multus/2.log" Feb 24 15:02:22 crc kubenswrapper[4982]: I0224 15:02:22.262471 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jgtdj" event={"ID":"86687a8a-6996-44fa-a62e-b43266c31922","Type":"ContainerStarted","Data":"b2d1e88bf4652f81923ab536db1c7d579eccb36cdd8f7b98eedf31ba58a91854"} Feb 24 15:02:23 crc kubenswrapper[4982]: I0224 15:02:23.144815 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5" Feb 24 15:02:23 crc kubenswrapper[4982]: I0224 15:02:23.144841 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:23 crc kubenswrapper[4982]: I0224 15:02:23.145574 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5" Feb 24 15:02:23 crc kubenswrapper[4982]: I0224 15:02:23.145687 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:23 crc kubenswrapper[4982]: I0224 15:02:23.407879 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532422-znpkl"] Feb 24 15:02:23 crc kubenswrapper[4982]: W0224 15:02:23.436265 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906e9aec_8f03_4230_8b2b_01459a8c2fcc.slice/crio-227fb2b2cd30626d6ab83e476e097129c278058afa39c4767b67f2357f9cb0c0 WatchSource:0}: Error finding container 227fb2b2cd30626d6ab83e476e097129c278058afa39c4767b67f2357f9cb0c0: Status 404 returned error can't find the container with id 227fb2b2cd30626d6ab83e476e097129c278058afa39c4767b67f2357f9cb0c0 Feb 24 15:02:23 crc kubenswrapper[4982]: I0224 15:02:23.444222 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5"] Feb 24 15:02:24 crc kubenswrapper[4982]: I0224 15:02:24.145005 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" Feb 24 15:02:24 crc kubenswrapper[4982]: I0224 15:02:24.147036 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" Feb 24 15:02:24 crc kubenswrapper[4982]: I0224 15:02:24.286669 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532422-znpkl" event={"ID":"f18680cf-a054-40f5-af8d-54eec7b94616","Type":"ContainerStarted","Data":"d93af60e429fa8ed2eff01f0dc0d494722fcf605022351e17f8d624716156f26"} Feb 24 15:02:24 crc kubenswrapper[4982]: I0224 15:02:24.288547 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5" event={"ID":"906e9aec-8f03-4230-8b2b-01459a8c2fcc","Type":"ContainerStarted","Data":"227fb2b2cd30626d6ab83e476e097129c278058afa39c4767b67f2357f9cb0c0"} Feb 24 15:02:24 crc kubenswrapper[4982]: I0224 15:02:24.491070 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4zfz6"] Feb 24 15:02:25 crc kubenswrapper[4982]: I0224 15:02:25.149287 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ll6lf" Feb 24 15:02:25 crc kubenswrapper[4982]: I0224 15:02:25.150446 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-ll6lf" Feb 24 15:02:25 crc kubenswrapper[4982]: I0224 15:02:25.297907 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" event={"ID":"8422a1e6-92b5-4b34-a360-004609a25ac0","Type":"ContainerStarted","Data":"3baee85620b9ed1ffea2a57d34666ec45e3f82ece855d8b06ff9a025335cb5bc"} Feb 24 15:02:25 crc kubenswrapper[4982]: I0224 15:02:25.771289 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-ll6lf"] Feb 24 15:02:25 crc kubenswrapper[4982]: W0224 15:02:25.908151 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd543e947_1fd6_4253_84c8_5dd81a835ba4.slice/crio-7086da4a86d064435d83012edab4248856ca4daee2a87c75083e8c1983d7f69c WatchSource:0}: Error finding container 7086da4a86d064435d83012edab4248856ca4daee2a87c75083e8c1983d7f69c: Status 404 returned error can't find the container with id 7086da4a86d064435d83012edab4248856ca4daee2a87c75083e8c1983d7f69c Feb 24 15:02:26 crc kubenswrapper[4982]: I0224 15:02:26.311691 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5" event={"ID":"906e9aec-8f03-4230-8b2b-01459a8c2fcc","Type":"ContainerStarted","Data":"f4ff4280c16e2df8e4ec44e6508a10b2bac5e7db61bcf2f5d9eef7e4ab40d5a8"} Feb 24 15:02:26 crc kubenswrapper[4982]: I0224 15:02:26.312972 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ll6lf" event={"ID":"d543e947-1fd6-4253-84c8-5dd81a835ba4","Type":"ContainerStarted","Data":"7086da4a86d064435d83012edab4248856ca4daee2a87c75083e8c1983d7f69c"} Feb 24 15:02:26 crc kubenswrapper[4982]: I0224 15:02:26.335987 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mpsc5" podStartSLOduration=13.686680901999999 podStartE2EDuration="16.335947083s" podCreationTimestamp="2026-02-24 15:02:10 +0000 UTC" firstStartedPulling="2026-02-24 15:02:23.443291936 +0000 UTC 
m=+805.062350429" lastFinishedPulling="2026-02-24 15:02:26.092558087 +0000 UTC m=+807.711616610" observedRunningTime="2026-02-24 15:02:26.328571985 +0000 UTC m=+807.947630478" watchObservedRunningTime="2026-02-24 15:02:26.335947083 +0000 UTC m=+807.955005616" Feb 24 15:02:27 crc kubenswrapper[4982]: I0224 15:02:27.323844 4982 generic.go:334] "Generic (PLEG): container finished" podID="f18680cf-a054-40f5-af8d-54eec7b94616" containerID="f285a4dff2586b06d32f60b93dde901764d4e4cfc073c0c8d7b45c5826a87e5a" exitCode=0 Feb 24 15:02:27 crc kubenswrapper[4982]: I0224 15:02:27.323943 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532422-znpkl" event={"ID":"f18680cf-a054-40f5-af8d-54eec7b94616","Type":"ContainerDied","Data":"f285a4dff2586b06d32f60b93dde901764d4e4cfc073c0c8d7b45c5826a87e5a"} Feb 24 15:02:28 crc kubenswrapper[4982]: I0224 15:02:28.335042 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" event={"ID":"8422a1e6-92b5-4b34-a360-004609a25ac0","Type":"ContainerStarted","Data":"144fda0ce7a18581add6dd938eccad3d33f421f96cfe448ca6e14b050d9b4937"} Feb 24 15:02:28 crc kubenswrapper[4982]: I0224 15:02:28.363765 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" podStartSLOduration=15.448613671 podStartE2EDuration="18.363719753s" podCreationTimestamp="2026-02-24 15:02:10 +0000 UTC" firstStartedPulling="2026-02-24 15:02:24.691180529 +0000 UTC m=+806.310239032" lastFinishedPulling="2026-02-24 15:02:27.606286591 +0000 UTC m=+809.225345114" observedRunningTime="2026-02-24 15:02:28.361210265 +0000 UTC m=+809.980268798" watchObservedRunningTime="2026-02-24 15:02:28.363719753 +0000 UTC m=+809.982778276" Feb 24 15:02:28 crc kubenswrapper[4982]: I0224 15:02:28.661227 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:28 crc kubenswrapper[4982]: I0224 15:02:28.818572 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp7p5\" (UniqueName: \"kubernetes.io/projected/f18680cf-a054-40f5-af8d-54eec7b94616-kube-api-access-xp7p5\") pod \"f18680cf-a054-40f5-af8d-54eec7b94616\" (UID: \"f18680cf-a054-40f5-af8d-54eec7b94616\") " Feb 24 15:02:28 crc kubenswrapper[4982]: I0224 15:02:28.830609 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18680cf-a054-40f5-af8d-54eec7b94616-kube-api-access-xp7p5" (OuterVolumeSpecName: "kube-api-access-xp7p5") pod "f18680cf-a054-40f5-af8d-54eec7b94616" (UID: "f18680cf-a054-40f5-af8d-54eec7b94616"). InnerVolumeSpecName "kube-api-access-xp7p5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:02:28 crc kubenswrapper[4982]: I0224 15:02:28.920884 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp7p5\" (UniqueName: \"kubernetes.io/projected/f18680cf-a054-40f5-af8d-54eec7b94616-kube-api-access-xp7p5\") on node \"crc\" DevicePath \"\"" Feb 24 15:02:29 crc kubenswrapper[4982]: I0224 15:02:29.351574 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532422-znpkl" event={"ID":"f18680cf-a054-40f5-af8d-54eec7b94616","Type":"ContainerDied","Data":"d93af60e429fa8ed2eff01f0dc0d494722fcf605022351e17f8d624716156f26"} Feb 24 15:02:29 crc kubenswrapper[4982]: I0224 15:02:29.352376 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93af60e429fa8ed2eff01f0dc0d494722fcf605022351e17f8d624716156f26" Feb 24 15:02:29 crc kubenswrapper[4982]: I0224 15:02:29.352424 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" Feb 24 15:02:29 crc kubenswrapper[4982]: I0224 15:02:29.351621 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532422-znpkl" Feb 24 15:02:29 crc kubenswrapper[4982]: I0224 15:02:29.712411 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532416-bztmv"] Feb 24 15:02:29 crc kubenswrapper[4982]: I0224 15:02:29.715892 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532416-bztmv"] Feb 24 15:02:31 crc kubenswrapper[4982]: I0224 15:02:31.158950 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fd23d9a-2f6f-4b85-beeb-514cf322d5ba" path="/var/lib/kubelet/pods/0fd23d9a-2f6f-4b85-beeb-514cf322d5ba/volumes" Feb 24 15:02:31 crc kubenswrapper[4982]: I0224 15:02:31.371203 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-ll6lf" event={"ID":"d543e947-1fd6-4253-84c8-5dd81a835ba4","Type":"ContainerStarted","Data":"5e5ec8b7c6c3bdd42b366177e1e6feb3866c94b1d01861de3acb6b0a8773de46"} Feb 24 15:02:31 crc kubenswrapper[4982]: I0224 15:02:31.399842 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-ll6lf" podStartSLOduration=17.121694519 podStartE2EDuration="21.399819485s" podCreationTimestamp="2026-02-24 15:02:10 +0000 UTC" firstStartedPulling="2026-02-24 15:02:25.911531591 +0000 UTC m=+807.530590104" lastFinishedPulling="2026-02-24 15:02:30.189656567 +0000 UTC m=+811.808715070" observedRunningTime="2026-02-24 15:02:31.393379581 +0000 UTC m=+813.012438074" watchObservedRunningTime="2026-02-24 15:02:31.399819485 +0000 UTC m=+813.018877968" Feb 24 15:02:32 crc kubenswrapper[4982]: I0224 15:02:32.770880 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dbnnq" Feb 24 15:02:36 crc kubenswrapper[4982]: I0224 15:02:36.334028 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4zfz6" Feb 24 15:02:39 crc kubenswrapper[4982]: I0224 15:02:39.054399 4982 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.838615 4982 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp"] Feb 24 15:03:02 crc kubenswrapper[4982]: E0224 15:03:02.842278 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18680cf-a054-40f5-af8d-54eec7b94616" containerName="oc" Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.842308 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18680cf-a054-40f5-af8d-54eec7b94616" containerName="oc" Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.842593 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18680cf-a054-40f5-af8d-54eec7b94616" containerName="oc" Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.844157 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.850724 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.873999 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp"] Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.911765 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.911807 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.911868 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tflnq\" (UniqueName: \"kubernetes.io/projected/11c92c38-3fb2-443e-bcfe-887679226802-kube-api-access-tflnq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.950185 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q"] Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.960002 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q"] Feb 24 15:03:02 crc kubenswrapper[4982]: I0224 15:03:02.960126 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.012545 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tflnq\" (UniqueName: \"kubernetes.io/projected/11c92c38-3fb2-443e-bcfe-887679226802-kube-api-access-tflnq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.012591 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxg7b\" (UniqueName: \"kubernetes.io/projected/59aa3bc9-c53a-4844-8572-79a2dc711e95-kube-api-access-bxg7b\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.012631 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.012818 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.012916 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.012990 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.013311 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.013583 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.031183 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tflnq\" (UniqueName: \"kubernetes.io/projected/11c92c38-3fb2-443e-bcfe-887679226802-kube-api-access-tflnq\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.113958 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.114062 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxg7b\" (UniqueName: \"kubernetes.io/projected/59aa3bc9-c53a-4844-8572-79a2dc711e95-kube-api-access-bxg7b\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.114115 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.114671 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.114781 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.131698 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxg7b\" (UniqueName: \"kubernetes.io/projected/59aa3bc9-c53a-4844-8572-79a2dc711e95-kube-api-access-bxg7b\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" Feb 24 15:03:03 crc 
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.177734 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp"
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.282600 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q"
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.477744 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q"]
Feb 24 15:03:03 crc kubenswrapper[4982]: W0224 15:03:03.487750 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59aa3bc9_c53a_4844_8572_79a2dc711e95.slice/crio-03180c0ee159ca675a547866f0868c7e6d62236639f76afefc36b9e3080532d1 WatchSource:0}: Error finding container 03180c0ee159ca675a547866f0868c7e6d62236639f76afefc36b9e3080532d1: Status 404 returned error can't find the container with id 03180c0ee159ca675a547866f0868c7e6d62236639f76afefc36b9e3080532d1
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.667716 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp"]
Feb 24 15:03:03 crc kubenswrapper[4982]: W0224 15:03:03.675473 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11c92c38_3fb2_443e_bcfe_887679226802.slice/crio-5065b098489ee46cea40468772ab796c7d2b9263a9e430f3a539471d750ad267 WatchSource:0}: Error finding container 5065b098489ee46cea40468772ab796c7d2b9263a9e430f3a539471d750ad267: Status 404 returned error can't find the container with id 5065b098489ee46cea40468772ab796c7d2b9263a9e430f3a539471d750ad267
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.994595 4982 generic.go:334] "Generic (PLEG): container finished" podID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerID="6fecb4937e81bd5ce710b76e1439b06da1aa95ac0a356f40adaa9a3f90a150cc" exitCode=0
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.994716 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" event={"ID":"59aa3bc9-c53a-4844-8572-79a2dc711e95","Type":"ContainerDied","Data":"6fecb4937e81bd5ce710b76e1439b06da1aa95ac0a356f40adaa9a3f90a150cc"}
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.994793 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" event={"ID":"59aa3bc9-c53a-4844-8572-79a2dc711e95","Type":"ContainerStarted","Data":"03180c0ee159ca675a547866f0868c7e6d62236639f76afefc36b9e3080532d1"}
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.997881 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.998572 4982 generic.go:334] "Generic (PLEG): container finished" podID="11c92c38-3fb2-443e-bcfe-887679226802" containerID="33548449a8dccaef41cb0a4094d65a68d07eb376770db988072ed56250550db7" exitCode=0
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.998652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" event={"ID":"11c92c38-3fb2-443e-bcfe-887679226802","Type":"ContainerDied","Data":"33548449a8dccaef41cb0a4094d65a68d07eb376770db988072ed56250550db7"}
Feb 24 15:03:03 crc kubenswrapper[4982]: I0224 15:03:03.998703 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" event={"ID":"11c92c38-3fb2-443e-bcfe-887679226802","Type":"ContainerStarted","Data":"5065b098489ee46cea40468772ab796c7d2b9263a9e430f3a539471d750ad267"}
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.020966 4982 generic.go:334] "Generic (PLEG): container finished" podID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerID="65b7737401aa08e40a96c49f1cfaa638f223186609a22781d8aafafc5287fbae" exitCode=0
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.022012 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" event={"ID":"59aa3bc9-c53a-4844-8572-79a2dc711e95","Type":"ContainerDied","Data":"65b7737401aa08e40a96c49f1cfaa638f223186609a22781d8aafafc5287fbae"}
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.027677 4982 generic.go:334] "Generic (PLEG): container finished" podID="11c92c38-3fb2-443e-bcfe-887679226802" containerID="99fa4fc9507c91a1cf85cfb0dd492412fb5510482a4d9db72cba7c8ad394c61d" exitCode=0
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.027742 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" event={"ID":"11c92c38-3fb2-443e-bcfe-887679226802","Type":"ContainerDied","Data":"99fa4fc9507c91a1cf85cfb0dd492412fb5510482a4d9db72cba7c8ad394c61d"}
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.514421 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-49qct"]
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.515804 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.538239 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49qct"]
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.673120 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-utilities\") pod \"redhat-operators-49qct\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.673161 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xdnf\" (UniqueName: \"kubernetes.io/projected/d489b979-cda7-4b0c-9536-c3f946bd3b88-kube-api-access-9xdnf\") pod \"redhat-operators-49qct\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.673215 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-catalog-content\") pod \"redhat-operators-49qct\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.774876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-utilities\") pod \"redhat-operators-49qct\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.774947 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xdnf\" (UniqueName: \"kubernetes.io/projected/d489b979-cda7-4b0c-9536-c3f946bd3b88-kube-api-access-9xdnf\") pod \"redhat-operators-49qct\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.775095 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-catalog-content\") pod \"redhat-operators-49qct\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.775460 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-utilities\") pod \"redhat-operators-49qct\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.775666 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-catalog-content\") pod \"redhat-operators-49qct\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.798286 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xdnf\" (UniqueName: \"kubernetes.io/projected/d489b979-cda7-4b0c-9536-c3f946bd3b88-kube-api-access-9xdnf\") pod \"redhat-operators-49qct\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:06 crc kubenswrapper[4982]: I0224 15:03:06.842969 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49qct"
Feb 24 15:03:07 crc kubenswrapper[4982]: I0224 15:03:07.054230 4982 generic.go:334] "Generic (PLEG): container finished" podID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerID="4348d836ab5eccc264652c50f80756355f441c9a45f8ca32206ee4bda95c4783" exitCode=0
Feb 24 15:03:07 crc kubenswrapper[4982]: I0224 15:03:07.054612 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q" event={"ID":"59aa3bc9-c53a-4844-8572-79a2dc711e95","Type":"ContainerDied","Data":"4348d836ab5eccc264652c50f80756355f441c9a45f8ca32206ee4bda95c4783"}
Feb 24 15:03:07 crc kubenswrapper[4982]: I0224 15:03:07.056052 4982 generic.go:334] "Generic (PLEG): container finished" podID="11c92c38-3fb2-443e-bcfe-887679226802" containerID="4f8c1f8867dbc83562da6ba178a00abac0046e0d714e75765048b132fff4071c" exitCode=0
Feb 24 15:03:07 crc kubenswrapper[4982]: I0224 15:03:07.056075 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" event={"ID":"11c92c38-3fb2-443e-bcfe-887679226802","Type":"ContainerDied","Data":"4f8c1f8867dbc83562da6ba178a00abac0046e0d714e75765048b132fff4071c"}
Feb 24 15:03:07 crc kubenswrapper[4982]: I0224 15:03:07.108689 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49qct"]
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.063725 4982 generic.go:334] "Generic (PLEG): container finished" podID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerID="d8b855756541e51fd1254fec4cb690b849f5b637f27ce4eec53c537a2ae4f613" exitCode=0
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.063838 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49qct" event={"ID":"d489b979-cda7-4b0c-9536-c3f946bd3b88","Type":"ContainerDied","Data":"d8b855756541e51fd1254fec4cb690b849f5b637f27ce4eec53c537a2ae4f613"}
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.064226 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49qct" event={"ID":"d489b979-cda7-4b0c-9536-c3f946bd3b88","Type":"ContainerStarted","Data":"80628888bbc08082755cc9167f1a3ab3920b98a98ffa49ff367780ab71cdc4d6"}
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.342416 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp"
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.406393 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-util\") pod \"11c92c38-3fb2-443e-bcfe-887679226802\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") "
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.406565 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tflnq\" (UniqueName: \"kubernetes.io/projected/11c92c38-3fb2-443e-bcfe-887679226802-kube-api-access-tflnq\") pod \"11c92c38-3fb2-443e-bcfe-887679226802\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") "
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.406600 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-bundle\") pod \"11c92c38-3fb2-443e-bcfe-887679226802\" (UID: \"11c92c38-3fb2-443e-bcfe-887679226802\") "
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.407737 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-bundle" (OuterVolumeSpecName: "bundle") pod "11c92c38-3fb2-443e-bcfe-887679226802" (UID: "11c92c38-3fb2-443e-bcfe-887679226802"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.411739 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c92c38-3fb2-443e-bcfe-887679226802-kube-api-access-tflnq" (OuterVolumeSpecName: "kube-api-access-tflnq") pod "11c92c38-3fb2-443e-bcfe-887679226802" (UID: "11c92c38-3fb2-443e-bcfe-887679226802"). InnerVolumeSpecName "kube-api-access-tflnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.420065 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-util" (OuterVolumeSpecName: "util") pod "11c92c38-3fb2-443e-bcfe-887679226802" (UID: "11c92c38-3fb2-443e-bcfe-887679226802"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.461185 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q"
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.508894 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-util\") pod \"59aa3bc9-c53a-4844-8572-79a2dc711e95\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") "
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.508978 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxg7b\" (UniqueName: \"kubernetes.io/projected/59aa3bc9-c53a-4844-8572-79a2dc711e95-kube-api-access-bxg7b\") pod \"59aa3bc9-c53a-4844-8572-79a2dc711e95\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") "
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.509004 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-bundle\") pod \"59aa3bc9-c53a-4844-8572-79a2dc711e95\" (UID: \"59aa3bc9-c53a-4844-8572-79a2dc711e95\") "
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.509178 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tflnq\" (UniqueName: \"kubernetes.io/projected/11c92c38-3fb2-443e-bcfe-887679226802-kube-api-access-tflnq\") on node \"crc\" DevicePath \"\""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.509190 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.509197 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11c92c38-3fb2-443e-bcfe-887679226802-util\") on node \"crc\" DevicePath \"\""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.509998 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-bundle" (OuterVolumeSpecName: "bundle") pod "59aa3bc9-c53a-4844-8572-79a2dc711e95" (UID: "59aa3bc9-c53a-4844-8572-79a2dc711e95"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.512444 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59aa3bc9-c53a-4844-8572-79a2dc711e95-kube-api-access-bxg7b" (OuterVolumeSpecName: "kube-api-access-bxg7b") pod "59aa3bc9-c53a-4844-8572-79a2dc711e95" (UID: "59aa3bc9-c53a-4844-8572-79a2dc711e95"). InnerVolumeSpecName "kube-api-access-bxg7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.522568 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-util" (OuterVolumeSpecName: "util") pod "59aa3bc9-c53a-4844-8572-79a2dc711e95" (UID: "59aa3bc9-c53a-4844-8572-79a2dc711e95"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.611684 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-util\") on node \"crc\" DevicePath \"\""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.611746 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxg7b\" (UniqueName: \"kubernetes.io/projected/59aa3bc9-c53a-4844-8572-79a2dc711e95-kube-api-access-bxg7b\") on node \"crc\" DevicePath \"\""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.611766 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59aa3bc9-c53a-4844-8572-79a2dc711e95-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.738038 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 15:03:08 crc kubenswrapper[4982]: I0224 15:03:08.738129 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp" Feb 24 15:03:10 crc kubenswrapper[4982]: I0224 15:03:10.118394 4982 generic.go:334] "Generic (PLEG): container finished" podID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerID="bea9c70fa8e823f4caa22c001c2be08a956516067eab2f3d66ab1131bd78f41a" exitCode=0 Feb 24 15:03:10 crc kubenswrapper[4982]: I0224 15:03:10.118471 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49qct" event={"ID":"d489b979-cda7-4b0c-9536-c3f946bd3b88","Type":"ContainerDied","Data":"bea9c70fa8e823f4caa22c001c2be08a956516067eab2f3d66ab1131bd78f41a"} Feb 24 15:03:11 crc kubenswrapper[4982]: I0224 15:03:11.127089 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49qct" event={"ID":"d489b979-cda7-4b0c-9536-c3f946bd3b88","Type":"ContainerStarted","Data":"20825ba44eaeb4d3d8f5b941b83ab520dab09fe3ffd75f852c6982a07476de65"} Feb 24 15:03:11 crc kubenswrapper[4982]: I0224 15:03:11.149816 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-49qct" podStartSLOduration=2.70198505 podStartE2EDuration="5.149798955s" podCreationTimestamp="2026-02-24 15:03:06 +0000 UTC" firstStartedPulling="2026-02-24 15:03:08.068766183 +0000 UTC m=+849.687824676" lastFinishedPulling="2026-02-24 15:03:10.516580058 +0000 UTC m=+852.135638581" observedRunningTime="2026-02-24 15:03:11.143994608 +0000 UTC m=+852.763053101" watchObservedRunningTime="2026-02-24 15:03:11.149798955 +0000 UTC m=+852.768857458" Feb 24 15:03:16 crc kubenswrapper[4982]: I0224 15:03:16.843921 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-49qct" Feb 24 15:03:16 crc kubenswrapper[4982]: I0224 15:03:16.844356 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-49qct" Feb 24 15:03:17 crc kubenswrapper[4982]: I0224 15:03:17.893161 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49qct" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerName="registry-server" probeResult="failure" output=< Feb 24 15:03:17 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:03:17 crc kubenswrapper[4982]: > Feb 24 15:03:19 crc kubenswrapper[4982]: I0224 15:03:19.820729 4982 scope.go:117] "RemoveContainer" containerID="c256484a9834d3a07f592fbf13e96fd6c8dc223bb16c915cc08d38ff3250bc82" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.835324 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps"] Feb 24 15:03:20 crc kubenswrapper[4982]: E0224 15:03:20.835559 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerName="pull" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.835571 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerName="pull" Feb 24 15:03:20 crc kubenswrapper[4982]: E0224 15:03:20.835588 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c92c38-3fb2-443e-bcfe-887679226802" containerName="util" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.835594 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c92c38-3fb2-443e-bcfe-887679226802" 
containerName="util" Feb 24 15:03:20 crc kubenswrapper[4982]: E0224 15:03:20.835607 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerName="extract" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.835613 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerName="extract" Feb 24 15:03:20 crc kubenswrapper[4982]: E0224 15:03:20.835622 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerName="util" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.835628 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerName="util" Feb 24 15:03:20 crc kubenswrapper[4982]: E0224 15:03:20.835636 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c92c38-3fb2-443e-bcfe-887679226802" containerName="extract" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.835642 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c92c38-3fb2-443e-bcfe-887679226802" containerName="extract" Feb 24 15:03:20 crc kubenswrapper[4982]: E0224 15:03:20.835653 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c92c38-3fb2-443e-bcfe-887679226802" containerName="pull" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.835658 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c92c38-3fb2-443e-bcfe-887679226802" containerName="pull" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.835757 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="11c92c38-3fb2-443e-bcfe-887679226802" containerName="extract" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.835773 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="59aa3bc9-c53a-4844-8572-79a2dc711e95" containerName="extract" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.836364 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.839147 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.841388 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-gbsm5" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.842050 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.842217 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.842418 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.842591 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.858904 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps"] Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.997434 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-manager-config\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.997511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6wf8\" (UniqueName: \"kubernetes.io/projected/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-kube-api-access-s6wf8\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.997566 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.997587 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-apiservice-cert\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:20 crc kubenswrapper[4982]: I0224 15:03:20.997630 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-webhook-cert\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.099257 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.099310 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-apiservice-cert\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.099360 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-webhook-cert\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.099386 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-manager-config\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.099432 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6wf8\" (UniqueName: \"kubernetes.io/projected/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-kube-api-access-s6wf8\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.101607 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-manager-config\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.109423 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-apiservice-cert\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.118739 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.118745 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-webhook-cert\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.126154 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6wf8\" (UniqueName: \"kubernetes.io/projected/9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c-kube-api-access-s6wf8\") pod \"loki-operator-controller-manager-6d4467cd99-kv4ps\" (UID: \"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.159558 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.434729 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps"] Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.565326 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-d86g4"] Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.566159 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-d86g4" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.569063 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.569275 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-4rdmh" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.569410 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.589112 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-d86g4"] Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.705988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9hmc\" (UniqueName: \"kubernetes.io/projected/dd40714b-1f28-413d-bfec-b2c20b09e12f-kube-api-access-v9hmc\") pod \"cluster-logging-operator-c769fd969-d86g4\" (UID: \"dd40714b-1f28-413d-bfec-b2c20b09e12f\") " pod="openshift-logging/cluster-logging-operator-c769fd969-d86g4" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.806944 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9hmc\" (UniqueName: \"kubernetes.io/projected/dd40714b-1f28-413d-bfec-b2c20b09e12f-kube-api-access-v9hmc\") pod \"cluster-logging-operator-c769fd969-d86g4\" (UID: \"dd40714b-1f28-413d-bfec-b2c20b09e12f\") " pod="openshift-logging/cluster-logging-operator-c769fd969-d86g4" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.829469 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9hmc\" (UniqueName: \"kubernetes.io/projected/dd40714b-1f28-413d-bfec-b2c20b09e12f-kube-api-access-v9hmc\") pod \"cluster-logging-operator-c769fd969-d86g4\" (UID: \"dd40714b-1f28-413d-bfec-b2c20b09e12f\") " pod="openshift-logging/cluster-logging-operator-c769fd969-d86g4" Feb 24 15:03:21 crc kubenswrapper[4982]: I0224 15:03:21.890475 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-d86g4" Feb 24 15:03:22 crc kubenswrapper[4982]: I0224 15:03:22.164959 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-d86g4"] Feb 24 15:03:22 crc kubenswrapper[4982]: W0224 15:03:22.181772 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd40714b_1f28_413d_bfec_b2c20b09e12f.slice/crio-39b9dd4a9b8ea69125ac40853f5e3ddc3058f42dd09f40142b19dcd2f9b52f3d WatchSource:0}: Error finding container 39b9dd4a9b8ea69125ac40853f5e3ddc3058f42dd09f40142b19dcd2f9b52f3d: Status 404 returned error can't find the container with id 39b9dd4a9b8ea69125ac40853f5e3ddc3058f42dd09f40142b19dcd2f9b52f3d Feb 24 15:03:22 crc kubenswrapper[4982]: I0224 15:03:22.210094 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" event={"ID":"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c","Type":"ContainerStarted","Data":"b0297ecdbcb418e2b511e9530702554f3802cbd767b9a7bb2d12a2c3d2e45298"} Feb 24 15:03:22 crc kubenswrapper[4982]: I0224 15:03:22.211254 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-d86g4" event={"ID":"dd40714b-1f28-413d-bfec-b2c20b09e12f","Type":"ContainerStarted","Data":"39b9dd4a9b8ea69125ac40853f5e3ddc3058f42dd09f40142b19dcd2f9b52f3d"} Feb 24 15:03:26 crc kubenswrapper[4982]: I0224 15:03:26.900158 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-49qct" Feb 24 15:03:26 crc kubenswrapper[4982]: I0224 15:03:26.944718 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-49qct" Feb 24 15:03:29 crc kubenswrapper[4982]: I0224 15:03:29.901665 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-49qct"] Feb 24 15:03:29 crc kubenswrapper[4982]: I0224 15:03:29.902214 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-49qct" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerName="registry-server" containerID="cri-o://20825ba44eaeb4d3d8f5b941b83ab520dab09fe3ffd75f852c6982a07476de65" gracePeriod=2 Feb 24 15:03:30 crc kubenswrapper[4982]: I0224 15:03:30.283295 4982 generic.go:334] "Generic (PLEG): container finished" podID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerID="20825ba44eaeb4d3d8f5b941b83ab520dab09fe3ffd75f852c6982a07476de65" exitCode=0 Feb 24 15:03:30 crc kubenswrapper[4982]: I0224 15:03:30.283397 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49qct" event={"ID":"d489b979-cda7-4b0c-9536-c3f946bd3b88","Type":"ContainerDied","Data":"20825ba44eaeb4d3d8f5b941b83ab520dab09fe3ffd75f852c6982a07476de65"} Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.575431 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-49qct" Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.662638 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xdnf\" (UniqueName: \"kubernetes.io/projected/d489b979-cda7-4b0c-9536-c3f946bd3b88-kube-api-access-9xdnf\") pod \"d489b979-cda7-4b0c-9536-c3f946bd3b88\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.662862 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-catalog-content\") pod \"d489b979-cda7-4b0c-9536-c3f946bd3b88\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.662907 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-utilities\") pod \"d489b979-cda7-4b0c-9536-c3f946bd3b88\" (UID: \"d489b979-cda7-4b0c-9536-c3f946bd3b88\") " Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.664009 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-utilities" (OuterVolumeSpecName: "utilities") pod "d489b979-cda7-4b0c-9536-c3f946bd3b88" (UID: "d489b979-cda7-4b0c-9536-c3f946bd3b88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.672276 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d489b979-cda7-4b0c-9536-c3f946bd3b88-kube-api-access-9xdnf" (OuterVolumeSpecName: "kube-api-access-9xdnf") pod "d489b979-cda7-4b0c-9536-c3f946bd3b88" (UID: "d489b979-cda7-4b0c-9536-c3f946bd3b88"). InnerVolumeSpecName "kube-api-access-9xdnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.765032 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.765071 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xdnf\" (UniqueName: \"kubernetes.io/projected/d489b979-cda7-4b0c-9536-c3f946bd3b88-kube-api-access-9xdnf\") on node \"crc\" DevicePath \"\"" Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.784720 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d489b979-cda7-4b0c-9536-c3f946bd3b88" (UID: "d489b979-cda7-4b0c-9536-c3f946bd3b88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:03:31 crc kubenswrapper[4982]: I0224 15:03:31.866222 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d489b979-cda7-4b0c-9536-c3f946bd3b88-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:03:32 crc kubenswrapper[4982]: I0224 15:03:32.320309 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49qct" event={"ID":"d489b979-cda7-4b0c-9536-c3f946bd3b88","Type":"ContainerDied","Data":"80628888bbc08082755cc9167f1a3ab3920b98a98ffa49ff367780ab71cdc4d6"} Feb 24 15:03:32 crc kubenswrapper[4982]: I0224 15:03:32.320693 4982 scope.go:117] "RemoveContainer" containerID="20825ba44eaeb4d3d8f5b941b83ab520dab09fe3ffd75f852c6982a07476de65" Feb 24 15:03:32 crc kubenswrapper[4982]: I0224 15:03:32.320418 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49qct" Feb 24 15:03:32 crc kubenswrapper[4982]: I0224 15:03:32.357616 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-49qct"] Feb 24 15:03:32 crc kubenswrapper[4982]: I0224 15:03:32.368780 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-49qct"] Feb 24 15:03:33 crc kubenswrapper[4982]: I0224 15:03:33.153713 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" path="/var/lib/kubelet/pods/d489b979-cda7-4b0c-9536-c3f946bd3b88/volumes" Feb 24 15:03:33 crc kubenswrapper[4982]: I0224 15:03:33.691337 4982 scope.go:117] "RemoveContainer" containerID="bea9c70fa8e823f4caa22c001c2be08a956516067eab2f3d66ab1131bd78f41a" Feb 24 15:03:33 crc kubenswrapper[4982]: I0224 15:03:33.734188 4982 scope.go:117] "RemoveContainer" containerID="d8b855756541e51fd1254fec4cb690b849f5b637f27ce4eec53c537a2ae4f613" Feb 24 15:03:34 crc kubenswrapper[4982]: I0224 15:03:34.341304 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" event={"ID":"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c","Type":"ContainerStarted","Data":"2539f741caa2cb658cf053adf99c1b80e2eb68da098463ae712ca818cd24230f"} Feb 24 15:03:34 crc kubenswrapper[4982]: I0224 15:03:34.342781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-d86g4" event={"ID":"dd40714b-1f28-413d-bfec-b2c20b09e12f","Type":"ContainerStarted","Data":"6763494addf39a1e6d531a914440b3317632f55f2d0687abc085b43b98deea0f"} Feb 24 15:03:34 crc kubenswrapper[4982]: I0224 15:03:34.366327 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-d86g4" podStartSLOduration=1.8147308340000001 podStartE2EDuration="13.366309638s" podCreationTimestamp="2026-02-24 15:03:21 +0000 UTC" firstStartedPulling="2026-02-24 15:03:22.185116174 +0000 UTC m=+863.804174667" lastFinishedPulling="2026-02-24 15:03:33.736694978 +0000 UTC m=+875.355753471" observedRunningTime="2026-02-24 15:03:34.35973023 +0000 UTC m=+875.978788723" watchObservedRunningTime="2026-02-24 15:03:34.366309638 +0000 UTC m=+875.985368131" Feb 24 15:03:38 crc kubenswrapper[4982]: I0224 15:03:38.737779 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:03:38 crc kubenswrapper[4982]: I0224 15:03:38.738025 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:03:40 crc kubenswrapper[4982]: I0224 15:03:40.390890 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" event={"ID":"9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c","Type":"ContainerStarted","Data":"cf0c58fe69ce98b2a772a521790f0de02f888d30e710e777152f4df8b9e16221"} Feb 24 15:03:40 crc kubenswrapper[4982]: I0224 15:03:40.391554 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:40 crc kubenswrapper[4982]: I0224 15:03:40.395969 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" Feb 24 15:03:40 crc kubenswrapper[4982]: I0224 15:03:40.431017 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6d4467cd99-kv4ps" podStartSLOduration=2.287289185 podStartE2EDuration="20.430993237s" podCreationTimestamp="2026-02-24 15:03:20 +0000 UTC" firstStartedPulling="2026-02-24 15:03:21.4417574 +0000 UTC m=+863.060815893" lastFinishedPulling="2026-02-24 15:03:39.585461452 +0000 UTC m=+881.204519945" observedRunningTime="2026-02-24 15:03:40.424656716 +0000 UTC m=+882.043715249" watchObservedRunningTime="2026-02-24 15:03:40.430993237 +0000 UTC m=+882.050051760" Feb 24 15:03:43 crc kubenswrapper[4982]: I0224 15:03:43.963456 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 24 15:03:43 crc kubenswrapper[4982]: E0224 15:03:43.964022 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerName="registry-server" Feb 24 15:03:43 crc kubenswrapper[4982]: I0224 15:03:43.964034 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerName="registry-server" Feb 24 15:03:43 crc kubenswrapper[4982]: E0224 15:03:43.964044 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerName="extract-content" Feb 24 15:03:43 crc kubenswrapper[4982]: I0224 15:03:43.964049 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerName="extract-content" Feb 24 15:03:43 crc kubenswrapper[4982]: E0224 15:03:43.964070 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerName="extract-utilities" Feb 24 15:03:43 crc kubenswrapper[4982]: I0224 15:03:43.964076 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerName="extract-utilities" Feb 24 15:03:43 crc kubenswrapper[4982]: I0224 15:03:43.964203 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d489b979-cda7-4b0c-9536-c3f946bd3b88" containerName="registry-server" Feb 24 15:03:43 crc kubenswrapper[4982]: I0224 15:03:43.964638 4982 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 24 15:03:43 crc kubenswrapper[4982]: I0224 15:03:43.966587 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 24 15:03:43 crc kubenswrapper[4982]: I0224 15:03:43.967269 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 24 15:03:43 crc kubenswrapper[4982]: I0224 15:03:43.973469 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.047666 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57de5daa-6b16-4a04-ae90-72d7540d0c55\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57de5daa-6b16-4a04-ae90-72d7540d0c55\") pod \"minio\" (UID: \"e27e08d7-856a-4c2d-8ce1-46722c79dd03\") " pod="minio-dev/minio" Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.047754 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcxw\" (UniqueName: \"kubernetes.io/projected/e27e08d7-856a-4c2d-8ce1-46722c79dd03-kube-api-access-tlcxw\") pod \"minio\" (UID: \"e27e08d7-856a-4c2d-8ce1-46722c79dd03\") " pod="minio-dev/minio" Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.149520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57de5daa-6b16-4a04-ae90-72d7540d0c55\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57de5daa-6b16-4a04-ae90-72d7540d0c55\") pod \"minio\" (UID: \"e27e08d7-856a-4c2d-8ce1-46722c79dd03\") " pod="minio-dev/minio" Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.149596 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlcxw\" (UniqueName: \"kubernetes.io/projected/e27e08d7-856a-4c2d-8ce1-46722c79dd03-kube-api-access-tlcxw\") pod \"minio\" (UID: \"e27e08d7-856a-4c2d-8ce1-46722c79dd03\") " pod="minio-dev/minio" Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.152893 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.152965 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57de5daa-6b16-4a04-ae90-72d7540d0c55\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57de5daa-6b16-4a04-ae90-72d7540d0c55\") pod \"minio\" (UID: \"e27e08d7-856a-4c2d-8ce1-46722c79dd03\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e731757044b01875444d56d628cb7be0c805c4c633cce28ff0b4c484df5130a4/globalmount\"" pod="minio-dev/minio" Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.173125 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlcxw\" (UniqueName: \"kubernetes.io/projected/e27e08d7-856a-4c2d-8ce1-46722c79dd03-kube-api-access-tlcxw\") pod \"minio\" (UID: \"e27e08d7-856a-4c2d-8ce1-46722c79dd03\") " pod="minio-dev/minio" Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.196736 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57de5daa-6b16-4a04-ae90-72d7540d0c55\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57de5daa-6b16-4a04-ae90-72d7540d0c55\") pod \"minio\" (UID: \"e27e08d7-856a-4c2d-8ce1-46722c79dd03\") " pod="minio-dev/minio" Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.326437 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 24 15:03:44 crc kubenswrapper[4982]: I0224 15:03:44.532352 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 24 15:03:44 crc kubenswrapper[4982]: W0224 15:03:44.535559 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode27e08d7_856a_4c2d_8ce1_46722c79dd03.slice/crio-6d67d54dbc232ae35f8618ef77d5d7fb3b5afcfe0a68cc227e7bb9decba2483c WatchSource:0}: Error finding container 6d67d54dbc232ae35f8618ef77d5d7fb3b5afcfe0a68cc227e7bb9decba2483c: Status 404 returned error can't find the container with id 6d67d54dbc232ae35f8618ef77d5d7fb3b5afcfe0a68cc227e7bb9decba2483c Feb 24 15:03:45 crc kubenswrapper[4982]: I0224 15:03:45.433452 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"e27e08d7-856a-4c2d-8ce1-46722c79dd03","Type":"ContainerStarted","Data":"6d67d54dbc232ae35f8618ef77d5d7fb3b5afcfe0a68cc227e7bb9decba2483c"} Feb 24 15:03:48 crc kubenswrapper[4982]: I0224 15:03:48.468391 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"e27e08d7-856a-4c2d-8ce1-46722c79dd03","Type":"ContainerStarted","Data":"78c3e474661c6ced4e4ba3c88fb8a2d89473c0da9e14c592a07eee52b62fcc2b"} Feb 24 15:03:48 crc kubenswrapper[4982]: I0224 15:03:48.497895 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.2826815830000005 podStartE2EDuration="7.497866657s" podCreationTimestamp="2026-02-24 15:03:41 +0000 UTC" firstStartedPulling="2026-02-24 15:03:44.538526848 +0000 UTC m=+886.157585341" lastFinishedPulling="2026-02-24 15:03:47.753711922 +0000 UTC m=+889.372770415" observedRunningTime="2026-02-24 15:03:48.489106202 +0000 UTC m=+890.108164735" watchObservedRunningTime="2026-02-24 15:03:48.497866657 +0000 UTC m=+890.116925190" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.159158 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 
15:03:54.165283 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.171119 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.171415 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-d4drp" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.171667 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.171799 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.171929 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.173343 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.262450 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.262583 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.262627 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-config\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.262650 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8md\" (UniqueName: \"kubernetes.io/projected/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-kube-api-access-dd8md\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.262755 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc 
kubenswrapper[4982]: I0224 15:03:54.284360 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-cldqm"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.285478 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.287315 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.287569 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.287699 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.306351 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-cldqm"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.343955 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.344818 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.353838 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.354037 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.358709 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.364854 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.364940 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.364980 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-config\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.365009 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8md\" (UniqueName: 
\"kubernetes.io/projected/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-kube-api-access-dd8md\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.365071 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.366082 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.367402 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-config\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.375807 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.386757 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.396301 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8md\" (UniqueName: \"kubernetes.io/projected/8fa63326-7a48-4c93-bad4-6ddb3d1d0731-kube-api-access-dd8md\") pod \"logging-loki-distributor-5d5548c9f5-p6kh9\" (UID: \"8fa63326-7a48-4c93-bad4-6ddb3d1d0731\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.453936 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-76767f4456-gmbbt"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.458819 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.461368 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.461625 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.461740 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.461786 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.462765 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.463026 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-76767f4456-hfv4z"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.464236 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.465990 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466039 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-wjj5h" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466047 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466075 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpdp9\" (UniqueName: \"kubernetes.io/projected/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-kube-api-access-vpdp9\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466093 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466116 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7m8tn\" (UniqueName: \"kubernetes.io/projected/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-kube-api-access-7m8tn\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466151 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466172 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-config\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466244 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466285 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466322 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.466350 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-config\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.485224 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76767f4456-gmbbt"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.489815 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.494632 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76767f4456-hfv4z"] Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567111 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-lokistack-gateway\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567160 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-rbac\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567189 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567211 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-rbac\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567231 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-tenants\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567264 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567292 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567319 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: 
\"kubernetes.io/secret/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567339 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567357 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-config\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567380 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnlpp\" (UniqueName: \"kubernetes.io/projected/8a3d5174-0a86-43bc-bc05-a974c01aef1b-kube-api-access-bnlpp\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567405 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567423 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567444 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567470 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpdp9\" (UniqueName: \"kubernetes.io/projected/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-kube-api-access-vpdp9\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567487 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567517 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m8tn\" (UniqueName: \"kubernetes.io/projected/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-kube-api-access-7m8tn\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567534 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567551 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-config\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567571 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567587 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-tls-secret\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567604 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567622 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s6cx\" (UniqueName: \"kubernetes.io/projected/f653b7f2-9a99-4426-b855-beb8dde56230-kube-api-access-2s6cx\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567641 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-tls-secret\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567689 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-lokistack-gateway\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567708 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.567727 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-tenants\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.569027 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.569845 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.570100 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-config\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.570259 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-config\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.578049 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: 
\"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.578463 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.583284 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.583316 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.590190 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m8tn\" (UniqueName: \"kubernetes.io/projected/6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4-kube-api-access-7m8tn\") pod \"logging-loki-query-frontend-6d6859c548-jr57w\" (UID: \"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.591757 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpdp9\" (UniqueName: \"kubernetes.io/projected/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-kube-api-access-vpdp9\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.596059 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-cldqm\" (UID: \"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.608827 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.671991 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.680929 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnlpp\" (UniqueName: \"kubernetes.io/projected/8a3d5174-0a86-43bc-bc05-a974c01aef1b-kube-api-access-bnlpp\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.680981 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681027 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681047 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s6cx\" (UniqueName: \"kubernetes.io/projected/f653b7f2-9a99-4426-b855-beb8dde56230-kube-api-access-2s6cx\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681065 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-tls-secret\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681083 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681102 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-tls-secret\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681126 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-lokistack-gateway\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 
15:03:54.681149 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681173 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-tenants\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-lokistack-gateway\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681215 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-rbac\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: E0224 15:03:54.681216 4982 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 24 15:03:54 crc kubenswrapper[4982]: E0224 15:03:54.681303 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-tls-secret podName:8a3d5174-0a86-43bc-bc05-a974c01aef1b nodeName:}" failed. No retries permitted until 2026-02-24 15:03:55.181280195 +0000 UTC m=+896.800338688 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-tls-secret") pod "logging-loki-gateway-76767f4456-gmbbt" (UID: "8a3d5174-0a86-43bc-bc05-a974c01aef1b") : secret "logging-loki-gateway-http" not found Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681233 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-rbac\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681671 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-tenants\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681736 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.681783 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.682628 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.683927 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.684578 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: E0224 15:03:54.684642 4982 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 24 15:03:54 crc kubenswrapper[4982]: E0224 15:03:54.684672 4982 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-tls-secret podName:f653b7f2-9a99-4426-b855-beb8dde56230 nodeName:}" failed. No retries permitted until 2026-02-24 15:03:55.184661466 +0000 UTC m=+896.803719959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-tls-secret") pod "logging-loki-gateway-76767f4456-hfv4z" (UID: "f653b7f2-9a99-4426-b855-beb8dde56230") : secret "logging-loki-gateway-http" not found Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.685612 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-rbac\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.685814 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-tenants\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.685899 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/8a3d5174-0a86-43bc-bc05-a974c01aef1b-lokistack-gateway\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.686120 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.691538 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.698693 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-lokistack-gateway\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.700754 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.700874 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-tenants\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.703910 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f653b7f2-9a99-4426-b855-beb8dde56230-rbac\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.704703 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnlpp\" (UniqueName: \"kubernetes.io/projected/8a3d5174-0a86-43bc-bc05-a974c01aef1b-kube-api-access-bnlpp\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.705867 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s6cx\" (UniqueName: \"kubernetes.io/projected/f653b7f2-9a99-4426-b855-beb8dde56230-kube-api-access-2s6cx\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:54 crc kubenswrapper[4982]: I0224 15:03:54.947646 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9"] Feb 24 15:03:55 crc kubenswrapper[4982]: W0224 15:03:55.079186 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c5894c9_7c05_4c3f_9ae1_a75f77f7f37a.slice/crio-6526ae188f2d46ccaf9201f209af2cc73d7d60e401a55d0cc4e2726ab504bdbd WatchSource:0}: Error finding container 6526ae188f2d46ccaf9201f209af2cc73d7d60e401a55d0cc4e2726ab504bdbd: Status 404 returned error can't find the container with id 6526ae188f2d46ccaf9201f209af2cc73d7d60e401a55d0cc4e2726ab504bdbd Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.079951 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-cldqm"] Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.159360 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w"] Feb 24 15:03:55 crc kubenswrapper[4982]: W0224 15:03:55.161347 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e368d67_d0ff_4b7e_b9f4_fe631e4db4e4.slice/crio-8e6d963c72388838bcdf03d33dc87b8e8d7a0076438c0284d18009bccb1a5a19 WatchSource:0}: Error finding container 8e6d963c72388838bcdf03d33dc87b8e8d7a0076438c0284d18009bccb1a5a19: Status 404 returned error can't find the container with id 8e6d963c72388838bcdf03d33dc87b8e8d7a0076438c0284d18009bccb1a5a19 Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.188748 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-tls-secret\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " 
pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.189104 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-tls-secret\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.192486 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f653b7f2-9a99-4426-b855-beb8dde56230-tls-secret\") pod \"logging-loki-gateway-76767f4456-hfv4z\" (UID: \"f653b7f2-9a99-4426-b855-beb8dde56230\") " pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.193207 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8a3d5174-0a86-43bc-bc05-a974c01aef1b-tls-secret\") pod \"logging-loki-gateway-76767f4456-gmbbt\" (UID: \"8a3d5174-0a86-43bc-bc05-a974c01aef1b\") " pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.264386 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.265401 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.267552 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.273523 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.281717 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.350694 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.352217 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.354577 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.354751 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.358561 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.380404 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.387988 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.411571 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22v8\" (UniqueName: \"kubernetes.io/projected/f4e9d226-da8e-46e8-b378-5aafba527e2c-kube-api-access-l22v8\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.411647 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.411674 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.411694 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.411732 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e9d226-da8e-46e8-b378-5aafba527e2c-config\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.411885 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d943049e-b17d-4c5e-9590-5136582d2f31\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d943049e-b17d-4c5e-9590-5136582d2f31\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.411988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.412135 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8b61845d-afa7-407b-9e65-a24634aad97b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b61845d-afa7-407b-9e65-a24634aad97b\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.423221 4982 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.424198 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.426643 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.427212 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.433402 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518470 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518549 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518577 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518613 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518644 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eb67e33-3f41-4046-930e-babb8b65f3cc-config\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518664 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7n9\" (UniqueName: \"kubernetes.io/projected/8eb67e33-3f41-4046-930e-babb8b65f3cc-kube-api-access-wf7n9\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518698 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: 
\"kubernetes.io/secret/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518723 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8b61845d-afa7-407b-9e65-a24634aad97b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b61845d-afa7-407b-9e65-a24634aad97b\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518757 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518777 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l22v8\" (UniqueName: \"kubernetes.io/projected/f4e9d226-da8e-46e8-b378-5aafba527e2c-kube-api-access-l22v8\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518809 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b53d5d5-a40c-4571-a3a8-2d7df79c7651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b53d5d5-a40c-4571-a3a8-2d7df79c7651\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518840 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518951 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e9d226-da8e-46e8-b378-5aafba527e2c-config\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.518994 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.519036 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d943049e-b17d-4c5e-9590-5136582d2f31\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d943049e-b17d-4c5e-9590-5136582d2f31\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.519058 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.519074 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.519101 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.519129 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3053cc27-e7fb-460e-84f8-92085a6aa8e5-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.519153 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.519183 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v75v9\" (UniqueName: \"kubernetes.io/projected/3053cc27-e7fb-460e-84f8-92085a6aa8e5-kube-api-access-v75v9\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.519201 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac27034f-de86-44ce-bb0d-b5a2b1e81858\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac27034f-de86-44ce-bb0d-b5a2b1e81858\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.521340 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e9d226-da8e-46e8-b378-5aafba527e2c-config\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.522029 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.527158 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.533158 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.533267 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f4e9d226-da8e-46e8-b378-5aafba527e2c-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.535187 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.535219 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.535220 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d943049e-b17d-4c5e-9590-5136582d2f31\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d943049e-b17d-4c5e-9590-5136582d2f31\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bb6d491748eec9912272ca99ac6dcbcedeb24bb177d9a2f023e18cd21bcd775d/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.535243 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8b61845d-afa7-407b-9e65-a24634aad97b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b61845d-afa7-407b-9e65-a24634aad97b\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cdb06e38e44bd66b1feb43eb4f364bbddd48822bb556c95b1c7e0faff80e936e/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.543280 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" event={"ID":"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4","Type":"ContainerStarted","Data":"8e6d963c72388838bcdf03d33dc87b8e8d7a0076438c0284d18009bccb1a5a19"} Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.544475 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22v8\" (UniqueName: \"kubernetes.io/projected/f4e9d226-da8e-46e8-b378-5aafba527e2c-kube-api-access-l22v8\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.545394 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" event={"ID":"8fa63326-7a48-4c93-bad4-6ddb3d1d0731","Type":"ContainerStarted","Data":"a400f40a4fa528d2d2c1815893dca248e96538a599e628c6287e8aea3c3e9e3f"} Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.546919 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" event={"ID":"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a","Type":"ContainerStarted","Data":"6526ae188f2d46ccaf9201f209af2cc73d7d60e401a55d0cc4e2726ab504bdbd"} Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.573327 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8b61845d-afa7-407b-9e65-a24634aad97b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8b61845d-afa7-407b-9e65-a24634aad97b\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.574214 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d943049e-b17d-4c5e-9590-5136582d2f31\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d943049e-b17d-4c5e-9590-5136582d2f31\") pod \"logging-loki-ingester-0\" (UID: \"f4e9d226-da8e-46e8-b378-5aafba527e2c\") " pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.612586 4982 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620434 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620477 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620511 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3053cc27-e7fb-460e-84f8-92085a6aa8e5-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620533 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620554 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v75v9\" (UniqueName: \"kubernetes.io/projected/3053cc27-e7fb-460e-84f8-92085a6aa8e5-kube-api-access-v75v9\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620575 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ac27034f-de86-44ce-bb0d-b5a2b1e81858\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac27034f-de86-44ce-bb0d-b5a2b1e81858\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620600 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620638 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620662 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8eb67e33-3f41-4046-930e-babb8b65f3cc-config\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620678 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620710 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7n9\" (UniqueName: \"kubernetes.io/projected/8eb67e33-3f41-4046-930e-babb8b65f3cc-kube-api-access-wf7n9\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620739 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620784 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b53d5d5-a40c-4571-a3a8-2d7df79c7651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b53d5d5-a40c-4571-a3a8-2d7df79c7651\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.620818 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.622645 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.623603 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eb67e33-3f41-4046-930e-babb8b65f3cc-config\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.625014 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.626555 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.626707 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.627971 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3053cc27-e7fb-460e-84f8-92085a6aa8e5-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.628302 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8eb67e33-3f41-4046-930e-babb8b65f3cc-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.630403 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.636452 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.636526 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ac27034f-de86-44ce-bb0d-b5a2b1e81858\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac27034f-de86-44ce-bb0d-b5a2b1e81858\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/05833ac4e844585ff4800455666e05593599aa599e1455531e7c873e492fba4e/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.636559 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.636613 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b53d5d5-a40c-4571-a3a8-2d7df79c7651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b53d5d5-a40c-4571-a3a8-2d7df79c7651\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4449db6dcc2d36410559758bcb742653b78f0141cf1fbbbb840deb70ed4012ef/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.636814 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.641534 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3053cc27-e7fb-460e-84f8-92085a6aa8e5-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.641897 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v75v9\" (UniqueName: \"kubernetes.io/projected/3053cc27-e7fb-460e-84f8-92085a6aa8e5-kube-api-access-v75v9\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.642711 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7n9\" (UniqueName: \"kubernetes.io/projected/8eb67e33-3f41-4046-930e-babb8b65f3cc-kube-api-access-wf7n9\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.668178 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b53d5d5-a40c-4571-a3a8-2d7df79c7651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b53d5d5-a40c-4571-a3a8-2d7df79c7651\") pod \"logging-loki-index-gateway-0\" (UID: \"3053cc27-e7fb-460e-84f8-92085a6aa8e5\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.670276 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ac27034f-de86-44ce-bb0d-b5a2b1e81858\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac27034f-de86-44ce-bb0d-b5a2b1e81858\") pod \"logging-loki-compactor-0\" (UID: \"8eb67e33-3f41-4046-930e-babb8b65f3cc\") " pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.794725 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.826426 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76767f4456-gmbbt"] Feb 24 15:03:55 crc kubenswrapper[4982]: W0224 15:03:55.831380 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a3d5174_0a86_43bc_bc05_a974c01aef1b.slice/crio-1f73f83ba993002c872bb519237a4bbf874158fcda7ef0ebe74e5d2b4cc4fd61 WatchSource:0}: Error finding container 1f73f83ba993002c872bb519237a4bbf874158fcda7ef0ebe74e5d2b4cc4fd61: Status 404 returned error can't find the container with id 1f73f83ba993002c872bb519237a4bbf874158fcda7ef0ebe74e5d2b4cc4fd61 Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.911042 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76767f4456-hfv4z"] Feb 24 15:03:55 crc kubenswrapper[4982]: W0224 15:03:55.922702 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf653b7f2_9a99_4426_b855_beb8dde56230.slice/crio-abbcfbacbf02358b398003fe7d2f0edea5f1e404a16849abc082136692e78366 WatchSource:0}: Error finding container abbcfbacbf02358b398003fe7d2f0edea5f1e404a16849abc082136692e78366: Status 404 returned error can't find the container with id abbcfbacbf02358b398003fe7d2f0edea5f1e404a16849abc082136692e78366 Feb 24 15:03:55 crc kubenswrapper[4982]: I0224 15:03:55.969830 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:56 crc kubenswrapper[4982]: I0224 15:03:56.007393 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 24 15:03:56 crc kubenswrapper[4982]: I0224 15:03:56.206434 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 24 15:03:56 crc kubenswrapper[4982]: W0224 15:03:56.211204 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3053cc27_e7fb_460e_84f8_92085a6aa8e5.slice/crio-05c5a6827670294603b80a4e10aa198f70ed109d1498a0b5d729293e668b78c2 WatchSource:0}: Error finding container 05c5a6827670294603b80a4e10aa198f70ed109d1498a0b5d729293e668b78c2: Status 404 returned error can't find the container with id 05c5a6827670294603b80a4e10aa198f70ed109d1498a0b5d729293e668b78c2 Feb 24 15:03:56 crc kubenswrapper[4982]: I0224 15:03:56.219185 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 24 15:03:56 crc kubenswrapper[4982]: W0224 15:03:56.225743 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eb67e33_3f41_4046_930e_babb8b65f3cc.slice/crio-a947722df7e04967ab1dc27f7cc614c3c5b43c334777bc9d7fdc50ed057697cc WatchSource:0}: Error finding container a947722df7e04967ab1dc27f7cc614c3c5b43c334777bc9d7fdc50ed057697cc: Status 404 returned error can't find the container with id a947722df7e04967ab1dc27f7cc614c3c5b43c334777bc9d7fdc50ed057697cc Feb 24 15:03:56 crc kubenswrapper[4982]: I0224 15:03:56.555242 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"8eb67e33-3f41-4046-930e-babb8b65f3cc","Type":"ContainerStarted","Data":"a947722df7e04967ab1dc27f7cc614c3c5b43c334777bc9d7fdc50ed057697cc"} Feb 24 15:03:56 crc kubenswrapper[4982]: I0224 15:03:56.556556 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" event={"ID":"f653b7f2-9a99-4426-b855-beb8dde56230","Type":"ContainerStarted","Data":"abbcfbacbf02358b398003fe7d2f0edea5f1e404a16849abc082136692e78366"} Feb 24 15:03:56 crc kubenswrapper[4982]: I0224 15:03:56.558179 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f4e9d226-da8e-46e8-b378-5aafba527e2c","Type":"ContainerStarted","Data":"094c7bb754bab89d3870416408b850b797c85274b06576c4cb8aedddffdf2b89"} Feb 24 15:03:56 crc kubenswrapper[4982]: I0224 15:03:56.560459 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" event={"ID":"8a3d5174-0a86-43bc-bc05-a974c01aef1b","Type":"ContainerStarted","Data":"1f73f83ba993002c872bb519237a4bbf874158fcda7ef0ebe74e5d2b4cc4fd61"} Feb 24 15:03:56 crc kubenswrapper[4982]: I0224 15:03:56.561717 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3053cc27-e7fb-460e-84f8-92085a6aa8e5","Type":"ContainerStarted","Data":"05c5a6827670294603b80a4e10aa198f70ed109d1498a0b5d729293e668b78c2"} Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.589300 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" event={"ID":"8a3d5174-0a86-43bc-bc05-a974c01aef1b","Type":"ContainerStarted","Data":"4dfe5929daf5fbc890acca5b7da20b364d203144800528eb959f0ba7b8d02178"} Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.590875 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3053cc27-e7fb-460e-84f8-92085a6aa8e5","Type":"ContainerStarted","Data":"98250340a5e0f2751858a4733483a29947a1ae7c9747f51487df0359b93c0f20"} Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.590981 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.592607 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" event={"ID":"8fa63326-7a48-4c93-bad4-6ddb3d1d0731","Type":"ContainerStarted","Data":"cc6c43aad0c34efd4a8365c7e8248ef2d152b7056cace1bec0e6d40471ae1ee5"} Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.592732 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.594116 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8eb67e33-3f41-4046-930e-babb8b65f3cc","Type":"ContainerStarted","Data":"864c2a1c6d0eab0ab857d3dd6390484a79e2d162124a5f193f1884ddf97108c8"} Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.594257 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.595400 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" 
event={"ID":"6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4","Type":"ContainerStarted","Data":"eebf6303b42a13460645b55629be8b15973b01094a0d783d5164edab1e8b2f30"} Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.595459 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.596797 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" event={"ID":"f653b7f2-9a99-4426-b855-beb8dde56230","Type":"ContainerStarted","Data":"bbb9a379fb62b124ebea7019e16f4854e5e79589e1675163e3bf40356eeb8669"} Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.598235 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" event={"ID":"3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a","Type":"ContainerStarted","Data":"2eaa27c45347423bc52ba3b6c98adab54c439e241b0324936a73fdbb4bdf6ddc"} Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.598344 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.599855 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f4e9d226-da8e-46e8-b378-5aafba527e2c","Type":"ContainerStarted","Data":"ee273aff16d54e8c96e4b42f8d757d375559955440e34d026f87fc83bd4600a0"} Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.600350 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.609616 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.832074479 podStartE2EDuration="5.609595395s" podCreationTimestamp="2026-02-24 15:03:54 +0000 UTC" firstStartedPulling="2026-02-24 15:03:56.214293708 +0000 UTC m=+897.833352201" lastFinishedPulling="2026-02-24 15:03:58.991814594 +0000 UTC m=+900.610873117" observedRunningTime="2026-02-24 15:03:59.605893635 +0000 UTC m=+901.224952148" watchObservedRunningTime="2026-02-24 15:03:59.609595395 +0000 UTC m=+901.228653908" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.625336 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" podStartSLOduration=1.690785896 podStartE2EDuration="5.625319488s" podCreationTimestamp="2026-02-24 15:03:54 +0000 UTC" firstStartedPulling="2026-02-24 15:03:55.084030213 +0000 UTC m=+896.703088706" lastFinishedPulling="2026-02-24 15:03:59.018563795 +0000 UTC m=+900.637622298" observedRunningTime="2026-02-24 15:03:59.621151006 +0000 UTC m=+901.240209509" watchObservedRunningTime="2026-02-24 15:03:59.625319488 +0000 UTC m=+901.244377981" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.644513 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.85551222 podStartE2EDuration="5.644476294s" podCreationTimestamp="2026-02-24 15:03:54 +0000 UTC" firstStartedPulling="2026-02-24 15:03:56.228790358 +0000 UTC m=+897.847848851" lastFinishedPulling="2026-02-24 15:03:59.017754392 +0000 UTC m=+900.636812925" observedRunningTime="2026-02-24 15:03:59.642993054 +0000 UTC m=+901.262051547" 
watchObservedRunningTime="2026-02-24 15:03:59.644476294 +0000 UTC m=+901.263534787" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.669844 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" podStartSLOduration=1.901886824 podStartE2EDuration="5.669819837s" podCreationTimestamp="2026-02-24 15:03:54 +0000 UTC" firstStartedPulling="2026-02-24 15:03:55.163926626 +0000 UTC m=+896.782985119" lastFinishedPulling="2026-02-24 15:03:58.931859649 +0000 UTC m=+900.550918132" observedRunningTime="2026-02-24 15:03:59.660054194 +0000 UTC m=+901.279112707" watchObservedRunningTime="2026-02-24 15:03:59.669819837 +0000 UTC m=+901.288878330" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.677129 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" podStartSLOduration=1.5705659079999998 podStartE2EDuration="5.677112433s" podCreationTimestamp="2026-02-24 15:03:54 +0000 UTC" firstStartedPulling="2026-02-24 15:03:54.944949957 +0000 UTC m=+896.564008450" lastFinishedPulling="2026-02-24 15:03:59.051496472 +0000 UTC m=+900.670554975" observedRunningTime="2026-02-24 15:03:59.673096545 +0000 UTC m=+901.292155058" watchObservedRunningTime="2026-02-24 15:03:59.677112433 +0000 UTC m=+901.296170926" Feb 24 15:03:59 crc kubenswrapper[4982]: I0224 15:03:59.691056 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.71776657 podStartE2EDuration="5.691035108s" podCreationTimestamp="2026-02-24 15:03:54 +0000 UTC" firstStartedPulling="2026-02-24 15:03:56.018214457 +0000 UTC m=+897.637272950" lastFinishedPulling="2026-02-24 15:03:58.991482995 +0000 UTC m=+900.610541488" observedRunningTime="2026-02-24 15:03:59.690552695 +0000 UTC m=+901.309611188" watchObservedRunningTime="2026-02-24 15:03:59.691035108 +0000 UTC m=+901.310093611" Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.159892 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532424-4mpf8"] Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.161328 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532424-4mpf8" Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.165410 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.166126 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.167458 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.173196 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532424-4mpf8"] Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.303822 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpcvq\" (UniqueName: \"kubernetes.io/projected/6e34c8b9-31ce-4317-9448-afa278a07724-kube-api-access-xpcvq\") pod \"auto-csr-approver-29532424-4mpf8\" (UID: \"6e34c8b9-31ce-4317-9448-afa278a07724\") " pod="openshift-infra/auto-csr-approver-29532424-4mpf8" Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.405194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpcvq\" (UniqueName: \"kubernetes.io/projected/6e34c8b9-31ce-4317-9448-afa278a07724-kube-api-access-xpcvq\") pod \"auto-csr-approver-29532424-4mpf8\" (UID: \"6e34c8b9-31ce-4317-9448-afa278a07724\") " pod="openshift-infra/auto-csr-approver-29532424-4mpf8" Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.426961 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpcvq\" (UniqueName: \"kubernetes.io/projected/6e34c8b9-31ce-4317-9448-afa278a07724-kube-api-access-xpcvq\") pod \"auto-csr-approver-29532424-4mpf8\" (UID: \"6e34c8b9-31ce-4317-9448-afa278a07724\") " pod="openshift-infra/auto-csr-approver-29532424-4mpf8" Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.476523 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532424-4mpf8" Feb 24 15:04:00 crc kubenswrapper[4982]: I0224 15:04:00.707710 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532424-4mpf8"] Feb 24 15:04:01 crc kubenswrapper[4982]: I0224 15:04:01.619897 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532424-4mpf8" event={"ID":"6e34c8b9-31ce-4317-9448-afa278a07724","Type":"ContainerStarted","Data":"fc3bb6a37c22e05f8c7abec1f62e30a23697005cb24d7c19063cb7ee6359a313"} Feb 24 15:04:02 crc kubenswrapper[4982]: I0224 15:04:02.640130 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" event={"ID":"8a3d5174-0a86-43bc-bc05-a974c01aef1b","Type":"ContainerStarted","Data":"40711d6ac83f08310663197b3bb0dcabe9a06a1c6e987c62697104dc840619a5"} Feb 24 15:04:02 crc kubenswrapper[4982]: I0224 15:04:02.641174 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:04:02 crc kubenswrapper[4982]: I0224 15:04:02.641244 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:04:02 crc kubenswrapper[4982]: I0224 15:04:02.645474 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e34c8b9-31ce-4317-9448-afa278a07724" containerID="5c04d4c0b5153e3bc2cb4f8c5e01966537b088949fa13ef16466a5117a562180" exitCode=0 Feb 24 15:04:02 crc kubenswrapper[4982]: I0224 15:04:02.645692 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532424-4mpf8" event={"ID":"6e34c8b9-31ce-4317-9448-afa278a07724","Type":"ContainerDied","Data":"5c04d4c0b5153e3bc2cb4f8c5e01966537b088949fa13ef16466a5117a562180"} Feb 24 15:04:02 crc kubenswrapper[4982]: I0224 15:04:02.662029 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:04:02 crc kubenswrapper[4982]: I0224 15:04:02.662562 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" Feb 24 15:04:02 crc kubenswrapper[4982]: I0224 15:04:02.689464 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-76767f4456-gmbbt" podStartSLOduration=3.022333475 podStartE2EDuration="8.689426854s" podCreationTimestamp="2026-02-24 15:03:54 +0000 UTC" firstStartedPulling="2026-02-24 15:03:55.834319444 +0000 UTC m=+897.453377937" lastFinishedPulling="2026-02-24 15:04:01.501412813 +0000 UTC m=+903.120471316" observedRunningTime="2026-02-24 15:04:02.671612564 +0000 UTC m=+904.290671057" watchObservedRunningTime="2026-02-24 15:04:02.689426854 +0000 UTC m=+904.308485387" Feb 24 15:04:04 crc kubenswrapper[4982]: I0224 15:04:04.002287 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532424-4mpf8" Feb 24 15:04:04 crc kubenswrapper[4982]: I0224 15:04:04.071243 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpcvq\" (UniqueName: \"kubernetes.io/projected/6e34c8b9-31ce-4317-9448-afa278a07724-kube-api-access-xpcvq\") pod \"6e34c8b9-31ce-4317-9448-afa278a07724\" (UID: \"6e34c8b9-31ce-4317-9448-afa278a07724\") " Feb 24 15:04:04 crc kubenswrapper[4982]: I0224 15:04:04.076370 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e34c8b9-31ce-4317-9448-afa278a07724-kube-api-access-xpcvq" (OuterVolumeSpecName: "kube-api-access-xpcvq") pod "6e34c8b9-31ce-4317-9448-afa278a07724" (UID: "6e34c8b9-31ce-4317-9448-afa278a07724"). InnerVolumeSpecName "kube-api-access-xpcvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:04:04 crc kubenswrapper[4982]: I0224 15:04:04.174206 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpcvq\" (UniqueName: \"kubernetes.io/projected/6e34c8b9-31ce-4317-9448-afa278a07724-kube-api-access-xpcvq\") on node \"crc\" DevicePath \"\"" Feb 24 15:04:04 crc kubenswrapper[4982]: I0224 15:04:04.667708 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532424-4mpf8" event={"ID":"6e34c8b9-31ce-4317-9448-afa278a07724","Type":"ContainerDied","Data":"fc3bb6a37c22e05f8c7abec1f62e30a23697005cb24d7c19063cb7ee6359a313"} Feb 24 15:04:04 crc kubenswrapper[4982]: I0224 15:04:04.667780 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc3bb6a37c22e05f8c7abec1f62e30a23697005cb24d7c19063cb7ee6359a313" Feb 24 15:04:04 crc kubenswrapper[4982]: I0224 15:04:04.667723 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532424-4mpf8" Feb 24 15:04:05 crc kubenswrapper[4982]: I0224 15:04:05.061575 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532418-cbbzs"] Feb 24 15:04:05 crc kubenswrapper[4982]: I0224 15:04:05.066952 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532418-cbbzs"] Feb 24 15:04:05 crc kubenswrapper[4982]: I0224 15:04:05.157031 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9" path="/var/lib/kubelet/pods/9d435ec8-8dc8-4805-8dea-ba8c3d2da6b9/volumes" Feb 24 15:04:05 crc kubenswrapper[4982]: I0224 15:04:05.675996 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" event={"ID":"f653b7f2-9a99-4426-b855-beb8dde56230","Type":"ContainerStarted","Data":"b9081840a3cd6270c656cef09c45986d69d4742f69bc2b874fec6f8ec43b1119"} Feb 24 15:04:05 crc kubenswrapper[4982]: I0224 15:04:05.676313 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:04:05 crc kubenswrapper[4982]: I0224 15:04:05.697810 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:04:05 crc kubenswrapper[4982]: I0224 15:04:05.705169 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" podStartSLOduration=2.479609276 podStartE2EDuration="11.705149706s" podCreationTimestamp="2026-02-24 15:03:54 +0000 UTC" firstStartedPulling="2026-02-24 15:03:55.930167275 +0000 UTC m=+897.549225768" lastFinishedPulling="2026-02-24 15:04:05.155707705 +0000 UTC m=+906.774766198" observedRunningTime="2026-02-24 15:04:05.696165473 +0000 UTC m=+907.315223966" watchObservedRunningTime="2026-02-24 15:04:05.705149706 +0000 UTC m=+907.324208209" Feb 24 15:04:06 crc kubenswrapper[4982]: I0224 15:04:06.685292 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:04:06 crc kubenswrapper[4982]: I0224 15:04:06.701493 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76767f4456-hfv4z" Feb 24 15:04:08 crc kubenswrapper[4982]: I0224 15:04:08.738105 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:04:08 crc kubenswrapper[4982]: I0224 15:04:08.739163 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:04:08 crc kubenswrapper[4982]: I0224 15:04:08.739292 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:04:08 crc kubenswrapper[4982]: I0224 15:04:08.740233 4982 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07b5b5ef08503bdcb3a99da20553629509399385e0be96239d55f7f7f354eb91"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:04:08 crc kubenswrapper[4982]: I0224 15:04:08.740379 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://07b5b5ef08503bdcb3a99da20553629509399385e0be96239d55f7f7f354eb91" gracePeriod=600 Feb 24 15:04:09 crc kubenswrapper[4982]: I0224 15:04:09.716094 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="07b5b5ef08503bdcb3a99da20553629509399385e0be96239d55f7f7f354eb91" exitCode=0 Feb 24 15:04:09 crc kubenswrapper[4982]: I0224 15:04:09.716196 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"07b5b5ef08503bdcb3a99da20553629509399385e0be96239d55f7f7f354eb91"} Feb 24 15:04:09 crc kubenswrapper[4982]: I0224 15:04:09.716561 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"aa114d36019ce5147a672d1a1ffc47f09215e89baaa3aa8a9c736d89a2586a36"} Feb 24 15:04:09 crc kubenswrapper[4982]: I0224 15:04:09.716614 4982 scope.go:117] "RemoveContainer" containerID="e5a98397f8b5beef975d846a08d561095dbc655637a46095abad7d674ae42009" Feb 24 15:04:14 crc kubenswrapper[4982]: I0224 15:04:14.499847 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-p6kh9" Feb 24 15:04:14 crc kubenswrapper[4982]: I0224 15:04:14.618671 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-cldqm" Feb 24 15:04:14 crc kubenswrapper[4982]: I0224 15:04:14.686369 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-jr57w" Feb 24 15:04:15 crc kubenswrapper[4982]: I0224 15:04:15.623357 4982 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 24 15:04:15 crc kubenswrapper[4982]: I0224 15:04:15.623450 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f4e9d226-da8e-46e8-b378-5aafba527e2c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 24 15:04:15 crc kubenswrapper[4982]: I0224 15:04:15.805138 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 24 15:04:15 crc kubenswrapper[4982]: I0224 15:04:15.979582 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 24 15:04:19 crc kubenswrapper[4982]: I0224 15:04:19.879883 4982 scope.go:117] "RemoveContainer" 
containerID="ef076d30211a7b15f92bbf597b640a94664865225364d019d17061fa07d3fae9" Feb 24 15:04:24 crc kubenswrapper[4982]: I0224 15:04:24.987402 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fq8qs"] Feb 24 15:04:24 crc kubenswrapper[4982]: E0224 15:04:24.988190 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e34c8b9-31ce-4317-9448-afa278a07724" containerName="oc" Feb 24 15:04:24 crc kubenswrapper[4982]: I0224 15:04:24.988205 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e34c8b9-31ce-4317-9448-afa278a07724" containerName="oc" Feb 24 15:04:24 crc kubenswrapper[4982]: I0224 15:04:24.988413 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e34c8b9-31ce-4317-9448-afa278a07724" containerName="oc" Feb 24 15:04:24 crc kubenswrapper[4982]: I0224 15:04:24.989673 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.011084 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fq8qs"] Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.162857 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-utilities\") pod \"community-operators-fq8qs\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.163271 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9mm9\" (UniqueName: \"kubernetes.io/projected/f0dd83ed-46a8-45df-b75c-1e4d74939bee-kube-api-access-g9mm9\") pod \"community-operators-fq8qs\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.163802 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-catalog-content\") pod \"community-operators-fq8qs\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.267397 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9mm9\" (UniqueName: \"kubernetes.io/projected/f0dd83ed-46a8-45df-b75c-1e4d74939bee-kube-api-access-g9mm9\") pod \"community-operators-fq8qs\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.267455 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-catalog-content\") pod \"community-operators-fq8qs\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.267493 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-utilities\") pod \"community-operators-fq8qs\" (UID: 
\"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.267891 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-utilities\") pod \"community-operators-fq8qs\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.268006 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-catalog-content\") pod \"community-operators-fq8qs\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.297355 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9mm9\" (UniqueName: \"kubernetes.io/projected/f0dd83ed-46a8-45df-b75c-1e4d74939bee-kube-api-access-g9mm9\") pod \"community-operators-fq8qs\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.310212 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.634678 4982 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.634940 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f4e9d226-da8e-46e8-b378-5aafba527e2c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 24 15:04:25 crc kubenswrapper[4982]: I0224 15:04:25.859174 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fq8qs"] Feb 24 15:04:25 crc kubenswrapper[4982]: W0224 15:04:25.865403 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0dd83ed_46a8_45df_b75c_1e4d74939bee.slice/crio-06adcc1424e801913fd44fb249dacc41fd2744c747879ea479753f2dc4189a1b WatchSource:0}: Error finding container 06adcc1424e801913fd44fb249dacc41fd2744c747879ea479753f2dc4189a1b: Status 404 returned error can't find the container with id 06adcc1424e801913fd44fb249dacc41fd2744c747879ea479753f2dc4189a1b Feb 24 15:04:26 crc kubenswrapper[4982]: I0224 15:04:26.872835 4982 generic.go:334] "Generic (PLEG): container finished" podID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerID="d628b930ec85bbe335374b1d22d3cdc36d1b90ffc988dedf8f80bc53e99e1ac7" exitCode=0 Feb 24 15:04:26 crc kubenswrapper[4982]: I0224 15:04:26.872922 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq8qs" event={"ID":"f0dd83ed-46a8-45df-b75c-1e4d74939bee","Type":"ContainerDied","Data":"d628b930ec85bbe335374b1d22d3cdc36d1b90ffc988dedf8f80bc53e99e1ac7"} Feb 24 15:04:26 crc kubenswrapper[4982]: I0224 15:04:26.872979 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-fq8qs" event={"ID":"f0dd83ed-46a8-45df-b75c-1e4d74939bee","Type":"ContainerStarted","Data":"06adcc1424e801913fd44fb249dacc41fd2744c747879ea479753f2dc4189a1b"} Feb 24 15:04:27 crc kubenswrapper[4982]: I0224 15:04:27.882197 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq8qs" event={"ID":"f0dd83ed-46a8-45df-b75c-1e4d74939bee","Type":"ContainerStarted","Data":"7f07cebfdb6c061bca6427fcda101fd785fc86023a298f7c24092a006f5e0ee2"} Feb 24 15:04:28 crc kubenswrapper[4982]: I0224 15:04:28.893776 4982 generic.go:334] "Generic (PLEG): container finished" podID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerID="7f07cebfdb6c061bca6427fcda101fd785fc86023a298f7c24092a006f5e0ee2" exitCode=0 Feb 24 15:04:28 crc kubenswrapper[4982]: I0224 15:04:28.893893 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq8qs" event={"ID":"f0dd83ed-46a8-45df-b75c-1e4d74939bee","Type":"ContainerDied","Data":"7f07cebfdb6c061bca6427fcda101fd785fc86023a298f7c24092a006f5e0ee2"} Feb 24 15:04:29 crc kubenswrapper[4982]: I0224 15:04:29.902591 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq8qs" event={"ID":"f0dd83ed-46a8-45df-b75c-1e4d74939bee","Type":"ContainerStarted","Data":"c1d5212000b218d903a6c11ae61a5c5e760cb7f5cfb55825b1bb15e1fb52b04a"} Feb 24 15:04:29 crc kubenswrapper[4982]: I0224 15:04:29.923177 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fq8qs" podStartSLOduration=3.502680187 podStartE2EDuration="5.923159245s" podCreationTimestamp="2026-02-24 15:04:24 +0000 UTC" firstStartedPulling="2026-02-24 15:04:26.875412741 +0000 UTC m=+928.494471274" lastFinishedPulling="2026-02-24 15:04:29.295891839 +0000 UTC m=+930.914950332" observedRunningTime="2026-02-24 15:04:29.921568542 +0000 UTC m=+931.540627035" watchObservedRunningTime="2026-02-24 15:04:29.923159245 +0000 UTC m=+931.542217728" Feb 24 15:04:35 crc kubenswrapper[4982]: I0224 15:04:35.311367 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:35 crc kubenswrapper[4982]: I0224 15:04:35.311999 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:35 crc kubenswrapper[4982]: I0224 15:04:35.377799 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:35 crc kubenswrapper[4982]: I0224 15:04:35.619424 4982 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 24 15:04:35 crc kubenswrapper[4982]: I0224 15:04:35.619472 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f4e9d226-da8e-46e8-b378-5aafba527e2c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 24 15:04:36 crc kubenswrapper[4982]: I0224 15:04:36.027810 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:36 crc kubenswrapper[4982]: I0224 15:04:36.096358 4982 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fq8qs"] Feb 24 15:04:37 crc kubenswrapper[4982]: I0224 15:04:37.977709 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fq8qs" podUID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerName="registry-server" containerID="cri-o://c1d5212000b218d903a6c11ae61a5c5e760cb7f5cfb55825b1bb15e1fb52b04a" gracePeriod=2 Feb 24 15:04:38 crc kubenswrapper[4982]: I0224 15:04:38.988230 4982 generic.go:334] "Generic (PLEG): container finished" podID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerID="c1d5212000b218d903a6c11ae61a5c5e760cb7f5cfb55825b1bb15e1fb52b04a" exitCode=0 Feb 24 15:04:38 crc kubenswrapper[4982]: I0224 15:04:38.988312 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq8qs" event={"ID":"f0dd83ed-46a8-45df-b75c-1e4d74939bee","Type":"ContainerDied","Data":"c1d5212000b218d903a6c11ae61a5c5e760cb7f5cfb55825b1bb15e1fb52b04a"} Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.564688 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.701013 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-utilities\") pod \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.701074 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9mm9\" (UniqueName: \"kubernetes.io/projected/f0dd83ed-46a8-45df-b75c-1e4d74939bee-kube-api-access-g9mm9\") pod \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.701122 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-catalog-content\") pod \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\" (UID: \"f0dd83ed-46a8-45df-b75c-1e4d74939bee\") " Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.702835 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-utilities" (OuterVolumeSpecName: "utilities") pod "f0dd83ed-46a8-45df-b75c-1e4d74939bee" (UID: "f0dd83ed-46a8-45df-b75c-1e4d74939bee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.707325 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0dd83ed-46a8-45df-b75c-1e4d74939bee-kube-api-access-g9mm9" (OuterVolumeSpecName: "kube-api-access-g9mm9") pod "f0dd83ed-46a8-45df-b75c-1e4d74939bee" (UID: "f0dd83ed-46a8-45df-b75c-1e4d74939bee"). InnerVolumeSpecName "kube-api-access-g9mm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.756027 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0dd83ed-46a8-45df-b75c-1e4d74939bee" (UID: "f0dd83ed-46a8-45df-b75c-1e4d74939bee"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.803105 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.803136 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9mm9\" (UniqueName: \"kubernetes.io/projected/f0dd83ed-46a8-45df-b75c-1e4d74939bee-kube-api-access-g9mm9\") on node \"crc\" DevicePath \"\"" Feb 24 15:04:39 crc kubenswrapper[4982]: I0224 15:04:39.803147 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0dd83ed-46a8-45df-b75c-1e4d74939bee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:04:40 crc kubenswrapper[4982]: I0224 15:04:40.001331 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq8qs" event={"ID":"f0dd83ed-46a8-45df-b75c-1e4d74939bee","Type":"ContainerDied","Data":"06adcc1424e801913fd44fb249dacc41fd2744c747879ea479753f2dc4189a1b"} Feb 24 15:04:40 crc kubenswrapper[4982]: I0224 15:04:40.001383 4982 scope.go:117] "RemoveContainer" containerID="c1d5212000b218d903a6c11ae61a5c5e760cb7f5cfb55825b1bb15e1fb52b04a" Feb 24 15:04:40 crc kubenswrapper[4982]: I0224 15:04:40.001429 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fq8qs" Feb 24 15:04:40 crc kubenswrapper[4982]: I0224 15:04:40.035039 4982 scope.go:117] "RemoveContainer" containerID="7f07cebfdb6c061bca6427fcda101fd785fc86023a298f7c24092a006f5e0ee2" Feb 24 15:04:40 crc kubenswrapper[4982]: I0224 15:04:40.051604 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fq8qs"] Feb 24 15:04:40 crc kubenswrapper[4982]: I0224 15:04:40.072370 4982 scope.go:117] "RemoveContainer" containerID="d628b930ec85bbe335374b1d22d3cdc36d1b90ffc988dedf8f80bc53e99e1ac7" Feb 24 15:04:40 crc kubenswrapper[4982]: I0224 15:04:40.080896 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fq8qs"] Feb 24 15:04:41 crc kubenswrapper[4982]: I0224 15:04:41.156256 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" path="/var/lib/kubelet/pods/f0dd83ed-46a8-45df-b75c-1e4d74939bee/volumes" Feb 24 15:04:45 crc kubenswrapper[4982]: I0224 15:04:45.617580 4982 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 24 15:04:45 crc kubenswrapper[4982]: I0224 15:04:45.617963 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f4e9d226-da8e-46e8-b378-5aafba527e2c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.628886 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d5hkt"] Feb 24 15:04:51 crc kubenswrapper[4982]: E0224 15:04:51.629911 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" 
containerName="extract-content" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.629931 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerName="extract-content" Feb 24 15:04:51 crc kubenswrapper[4982]: E0224 15:04:51.629974 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerName="registry-server" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.629985 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerName="registry-server" Feb 24 15:04:51 crc kubenswrapper[4982]: E0224 15:04:51.630009 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerName="extract-utilities" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.630020 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerName="extract-utilities" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.630204 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0dd83ed-46a8-45df-b75c-1e4d74939bee" containerName="registry-server" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.631851 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.647098 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5hkt"] Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.812832 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-catalog-content\") pod \"redhat-marketplace-d5hkt\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.812928 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqfbg\" (UniqueName: \"kubernetes.io/projected/9dff3a0c-0671-4837-b4f9-c52a889c440b-kube-api-access-zqfbg\") pod \"redhat-marketplace-d5hkt\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.812972 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-utilities\") pod \"redhat-marketplace-d5hkt\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.915049 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-catalog-content\") pod \"redhat-marketplace-d5hkt\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.915149 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqfbg\" (UniqueName: \"kubernetes.io/projected/9dff3a0c-0671-4837-b4f9-c52a889c440b-kube-api-access-zqfbg\") pod \"redhat-marketplace-d5hkt\" (UID: 
\"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.915184 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-utilities\") pod \"redhat-marketplace-d5hkt\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.915629 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-catalog-content\") pod \"redhat-marketplace-d5hkt\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.915886 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-utilities\") pod \"redhat-marketplace-d5hkt\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.942217 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqfbg\" (UniqueName: \"kubernetes.io/projected/9dff3a0c-0671-4837-b4f9-c52a889c440b-kube-api-access-zqfbg\") pod \"redhat-marketplace-d5hkt\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:51 crc kubenswrapper[4982]: I0224 15:04:51.958236 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:04:52 crc kubenswrapper[4982]: I0224 15:04:52.399550 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5hkt"] Feb 24 15:04:53 crc kubenswrapper[4982]: I0224 15:04:53.122839 4982 generic.go:334] "Generic (PLEG): container finished" podID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerID="697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e" exitCode=0 Feb 24 15:04:53 crc kubenswrapper[4982]: I0224 15:04:53.123088 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5hkt" event={"ID":"9dff3a0c-0671-4837-b4f9-c52a889c440b","Type":"ContainerDied","Data":"697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e"} Feb 24 15:04:53 crc kubenswrapper[4982]: I0224 15:04:53.123115 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5hkt" event={"ID":"9dff3a0c-0671-4837-b4f9-c52a889c440b","Type":"ContainerStarted","Data":"5cefb6eada73158698cc2f58883c3c6a93b1cceaa0baeaeb9eb62338f6a1dea9"} Feb 24 15:04:54 crc kubenswrapper[4982]: I0224 15:04:54.139616 4982 generic.go:334] "Generic (PLEG): container finished" podID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerID="4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52" exitCode=0 Feb 24 15:04:54 crc kubenswrapper[4982]: I0224 15:04:54.139711 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5hkt" event={"ID":"9dff3a0c-0671-4837-b4f9-c52a889c440b","Type":"ContainerDied","Data":"4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52"} Feb 24 15:04:55 crc kubenswrapper[4982]: I0224 15:04:55.157892 4982 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5hkt" event={"ID":"9dff3a0c-0671-4837-b4f9-c52a889c440b","Type":"ContainerStarted","Data":"50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4"}
Feb 24 15:04:55 crc kubenswrapper[4982]: I0224 15:04:55.172528 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d5hkt" podStartSLOduration=2.522895782 podStartE2EDuration="4.172489815s" podCreationTimestamp="2026-02-24 15:04:51 +0000 UTC" firstStartedPulling="2026-02-24 15:04:53.125250781 +0000 UTC m=+954.744309284" lastFinishedPulling="2026-02-24 15:04:54.774844824 +0000 UTC m=+956.393903317" observedRunningTime="2026-02-24 15:04:55.167040758 +0000 UTC m=+956.786099271" watchObservedRunningTime="2026-02-24 15:04:55.172489815 +0000 UTC m=+956.791548308"
Feb 24 15:04:55 crc kubenswrapper[4982]: I0224 15:04:55.621845 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Feb 24 15:05:01 crc kubenswrapper[4982]: I0224 15:05:01.959030 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d5hkt"
Feb 24 15:05:01 crc kubenswrapper[4982]: I0224 15:05:01.959713 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d5hkt"
Feb 24 15:05:02 crc kubenswrapper[4982]: I0224 15:05:02.019088 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d5hkt"
Feb 24 15:05:02 crc kubenswrapper[4982]: I0224 15:05:02.283772 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d5hkt"
Feb 24 15:05:02 crc kubenswrapper[4982]: I0224 15:05:02.343996 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5hkt"]
Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.224549 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d5hkt" podUID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerName="registry-server" containerID="cri-o://50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4" gracePeriod=2
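
The repeated 503s from logging-loki-ingester-0 trace a ring lifecycle rather than a fault: the readiness endpoint answers "Ingester not ready: this instance owns no tokens" until the instance has joined Loki's hash ring, then "waiting for 15s after being ready" through a post-join delay, and at 15:04:55 the kubelet finally records status="ready". Unlike the liveness failure earlier, a failing readiness probe only keeps the pod out of service endpoints; nothing is restarted. A toy readiness handler in the same spirit (an assumed shape, not Loki's code; 3100 is Loki's usual HTTP port):

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        // Stand-in for "joined the ring and the post-ready delay elapsed".
        readyAt := time.Now().Add(15 * time.Second)
        http.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
            if time.Now().Before(readyAt) {
                // The body the kubelet logs as start-of-body=...
                http.Error(w, "Ingester not ready: waiting for 15s after being ready",
                    http.StatusServiceUnavailable)
                return
            }
            w.WriteHeader(http.StatusOK)
        })
        log.Fatal(http.ListenAndServe(":3100", nil))
    }
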
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.741537 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-utilities\") pod \"9dff3a0c-0671-4837-b4f9-c52a889c440b\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.741667 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqfbg\" (UniqueName: \"kubernetes.io/projected/9dff3a0c-0671-4837-b4f9-c52a889c440b-kube-api-access-zqfbg\") pod \"9dff3a0c-0671-4837-b4f9-c52a889c440b\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.741700 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-catalog-content\") pod \"9dff3a0c-0671-4837-b4f9-c52a889c440b\" (UID: \"9dff3a0c-0671-4837-b4f9-c52a889c440b\") " Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.742429 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-utilities" (OuterVolumeSpecName: "utilities") pod "9dff3a0c-0671-4837-b4f9-c52a889c440b" (UID: "9dff3a0c-0671-4837-b4f9-c52a889c440b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.748288 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dff3a0c-0671-4837-b4f9-c52a889c440b-kube-api-access-zqfbg" (OuterVolumeSpecName: "kube-api-access-zqfbg") pod "9dff3a0c-0671-4837-b4f9-c52a889c440b" (UID: "9dff3a0c-0671-4837-b4f9-c52a889c440b"). InnerVolumeSpecName "kube-api-access-zqfbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.763042 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dff3a0c-0671-4837-b4f9-c52a889c440b" (UID: "9dff3a0c-0671-4837-b4f9-c52a889c440b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.844267 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.844322 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqfbg\" (UniqueName: \"kubernetes.io/projected/9dff3a0c-0671-4837-b4f9-c52a889c440b-kube-api-access-zqfbg\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:04 crc kubenswrapper[4982]: I0224 15:05:04.844345 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dff3a0c-0671-4837-b4f9-c52a889c440b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.235759 4982 generic.go:334] "Generic (PLEG): container finished" podID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerID="50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4" exitCode=0 Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.235832 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5hkt" event={"ID":"9dff3a0c-0671-4837-b4f9-c52a889c440b","Type":"ContainerDied","Data":"50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4"} Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.236133 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d5hkt" event={"ID":"9dff3a0c-0671-4837-b4f9-c52a889c440b","Type":"ContainerDied","Data":"5cefb6eada73158698cc2f58883c3c6a93b1cceaa0baeaeb9eb62338f6a1dea9"} Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.236160 4982 scope.go:117] "RemoveContainer" containerID="50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.235861 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d5hkt" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.274680 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5hkt"] Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.274756 4982 scope.go:117] "RemoveContainer" containerID="4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.282800 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d5hkt"] Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.300085 4982 scope.go:117] "RemoveContainer" containerID="697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.335892 4982 scope.go:117] "RemoveContainer" containerID="50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4" Feb 24 15:05:05 crc kubenswrapper[4982]: E0224 15:05:05.336618 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4\": container with ID starting with 50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4 not found: ID does not exist" containerID="50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.336686 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4"} err="failed to get container status \"50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4\": rpc error: code = NotFound desc = could not find container \"50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4\": container with ID starting with 50a5b4e3b48aeeeb24140f4868b3151694b4c6571736a33c4a843267c95b00f4 not found: ID does not exist" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.336725 4982 scope.go:117] "RemoveContainer" containerID="4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52" Feb 24 15:05:05 crc kubenswrapper[4982]: E0224 15:05:05.337407 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52\": container with ID starting with 4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52 not found: ID does not exist" containerID="4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.337466 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52"} err="failed to get container status \"4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52\": rpc error: code = NotFound desc = could not find container \"4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52\": container with ID starting with 4fdd3fe212140ea7b63e5f8bd0a7ceea2c537c61cb425223860fd5f4ddc06b52 not found: ID does not exist" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.337528 4982 scope.go:117] "RemoveContainer" containerID="697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e" Feb 24 15:05:05 crc kubenswrapper[4982]: E0224 15:05:05.338066 4982 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e\": container with ID starting with 697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e not found: ID does not exist" containerID="697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e" Feb 24 15:05:05 crc kubenswrapper[4982]: I0224 15:05:05.338138 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e"} err="failed to get container status \"697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e\": rpc error: code = NotFound desc = could not find container \"697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e\": container with ID starting with 697191afe007e377f689ae9e19be62cd4f8a546aff39a4d929d437accd9e949e not found: ID does not exist" Feb 24 15:05:07 crc kubenswrapper[4982]: I0224 15:05:07.160955 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dff3a0c-0671-4837-b4f9-c52a889c440b" path="/var/lib/kubelet/pods/9dff3a0c-0671-4837-b4f9-c52a889c440b/volumes" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.024401 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rsvq8"] Feb 24 15:05:11 crc kubenswrapper[4982]: E0224 15:05:11.025017 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerName="extract-utilities" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.025034 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerName="extract-utilities" Feb 24 15:05:11 crc kubenswrapper[4982]: E0224 15:05:11.025063 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerName="extract-content" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.025072 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerName="extract-content" Feb 24 15:05:11 crc kubenswrapper[4982]: E0224 15:05:11.025085 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerName="registry-server" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.025093 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerName="registry-server" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.025277 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dff3a0c-0671-4837-b4f9-c52a889c440b" containerName="registry-server" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.026477 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.092129 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rsvq8"] Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.154904 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ptj7\" (UniqueName: \"kubernetes.io/projected/fef86343-cadb-4d05-9729-30cf6eb9ed7d-kube-api-access-5ptj7\") pod \"certified-operators-rsvq8\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.154983 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-catalog-content\") pod \"certified-operators-rsvq8\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.155050 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-utilities\") pod \"certified-operators-rsvq8\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.256874 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-utilities\") pod \"certified-operators-rsvq8\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.257032 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ptj7\" (UniqueName: \"kubernetes.io/projected/fef86343-cadb-4d05-9729-30cf6eb9ed7d-kube-api-access-5ptj7\") pod \"certified-operators-rsvq8\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.257102 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-catalog-content\") pod \"certified-operators-rsvq8\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.257381 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-utilities\") pod \"certified-operators-rsvq8\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.257783 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-catalog-content\") pod \"certified-operators-rsvq8\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.286212 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5ptj7\" (UniqueName: \"kubernetes.io/projected/fef86343-cadb-4d05-9729-30cf6eb9ed7d-kube-api-access-5ptj7\") pod \"certified-operators-rsvq8\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.342946 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:11 crc kubenswrapper[4982]: I0224 15:05:11.825304 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rsvq8"] Feb 24 15:05:12 crc kubenswrapper[4982]: I0224 15:05:12.296806 4982 generic.go:334] "Generic (PLEG): container finished" podID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerID="56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656" exitCode=0 Feb 24 15:05:12 crc kubenswrapper[4982]: I0224 15:05:12.296875 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvq8" event={"ID":"fef86343-cadb-4d05-9729-30cf6eb9ed7d","Type":"ContainerDied","Data":"56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656"} Feb 24 15:05:12 crc kubenswrapper[4982]: I0224 15:05:12.297179 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvq8" event={"ID":"fef86343-cadb-4d05-9729-30cf6eb9ed7d","Type":"ContainerStarted","Data":"8d2ab6f026203a076e9588802df48c0bd1d9161f12b7160efeb2c854105506b2"} Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.060004 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-kdjxn"] Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.061796 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.066004 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.066228 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.066711 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.066834 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-b928f" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.066973 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.069694 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-kdjxn"] Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.073610 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.115752 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-kdjxn"] Feb 24 15:05:13 crc kubenswrapper[4982]: E0224 15:05:13.116447 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-z2v26 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-z2v26 metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-kdjxn" podUID="8ec9265d-28f8-4f48-b1e7-5746dcf83e02" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.184709 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config-openshift-service-cacrt\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.184758 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-syslog-receiver\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.184959 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-entrypoint\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.185059 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2v26\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-kube-api-access-z2v26\") pod 
\"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.185089 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-datadir\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.185200 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-tmp\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.185271 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-token\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.185408 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-trusted-ca\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.185450 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.185473 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-sa-token\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.185562 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-metrics\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287268 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-tmp\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287354 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-token\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287413 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-trusted-ca\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287456 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287483 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-sa-token\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287541 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-metrics\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287589 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config-openshift-service-cacrt\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287631 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-syslog-receiver\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287716 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-entrypoint\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287763 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2v26\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-kube-api-access-z2v26\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287785 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-datadir\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.287869 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-datadir\") pod \"collector-kdjxn\" (UID: 
\"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: E0224 15:05:13.288122 4982 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 24 15:05:13 crc kubenswrapper[4982]: E0224 15:05:13.288252 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-syslog-receiver podName:8ec9265d-28f8-4f48-b1e7-5746dcf83e02 nodeName:}" failed. No retries permitted until 2026-02-24 15:05:13.788230779 +0000 UTC m=+975.407289272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-syslog-receiver") pod "collector-kdjxn" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02") : secret "collector-syslog-receiver" not found Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.288651 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-trusted-ca\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.288941 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-entrypoint\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.289108 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config-openshift-service-cacrt\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.289287 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.293337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-metrics\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.293567 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-tmp\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.305064 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-token\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.306642 4982 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvq8" event={"ID":"fef86343-cadb-4d05-9729-30cf6eb9ed7d","Type":"ContainerStarted","Data":"5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99"} Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.306669 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.308120 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2v26\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-kube-api-access-z2v26\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.313885 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-sa-token\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.346144 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490605 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-tmp\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490647 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config-openshift-service-cacrt\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490673 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-token\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490748 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-datadir\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490766 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490784 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-trusted-ca\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490798 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-entrypoint\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490827 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-sa-token\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490860 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-metrics\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.490895 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2v26\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-kube-api-access-z2v26\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.491109 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.491332 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-datadir" (OuterVolumeSpecName: "datadir") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.491629 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.491882 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config" (OuterVolumeSpecName: "config") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.492262 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.494460 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-sa-token" (OuterVolumeSpecName: "sa-token") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.494927 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-kube-api-access-z2v26" (OuterVolumeSpecName: "kube-api-access-z2v26") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "kube-api-access-z2v26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.495088 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-metrics" (OuterVolumeSpecName: "metrics") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.495632 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-token" (OuterVolumeSpecName: "collector-token") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.505737 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-tmp" (OuterVolumeSpecName: "tmp") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592455 4982 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-tmp\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592534 4982 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592558 4982 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-token\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592579 4982 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-datadir\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592596 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592613 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592630 4982 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592646 4982 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592663 4982 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.592680 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2v26\" (UniqueName: \"kubernetes.io/projected/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-kube-api-access-z2v26\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.796127 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-syslog-receiver\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.799963 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-syslog-receiver\") pod \"collector-kdjxn\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " pod="openshift-logging/collector-kdjxn" Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.897322 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-syslog-receiver\") pod \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\" (UID: \"8ec9265d-28f8-4f48-b1e7-5746dcf83e02\") " Feb 24 15:05:13 crc kubenswrapper[4982]: I0224 15:05:13.900064 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "8ec9265d-28f8-4f48-b1e7-5746dcf83e02" (UID: "8ec9265d-28f8-4f48-b1e7-5746dcf83e02"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.000008 4982 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8ec9265d-28f8-4f48-b1e7-5746dcf83e02-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.317976 4982 generic.go:334] "Generic (PLEG): container finished" podID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerID="5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99" exitCode=0 Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.318030 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvq8" event={"ID":"fef86343-cadb-4d05-9729-30cf6eb9ed7d","Type":"ContainerDied","Data":"5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99"} Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.318095 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-kdjxn" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.392739 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-kdjxn"] Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.397751 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-kdjxn"] Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.413406 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-vt6tk"] Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.414555 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.417035 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.417184 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.417202 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.417295 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-b928f" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.418688 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.427426 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.429343 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-vt6tk"] Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506452 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fd36302e-5b75-4a73-ae39-e4a8e58f2682-sa-token\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506563 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fd36302e-5b75-4a73-ae39-e4a8e58f2682-collector-syslog-receiver\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506589 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-trusted-ca\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506622 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fd36302e-5b75-4a73-ae39-e4a8e58f2682-metrics\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506653 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fd36302e-5b75-4a73-ae39-e4a8e58f2682-tmp\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506680 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-config-openshift-service-cacrt\") pod \"collector-vt6tk\" (UID: 
\"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506701 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-config\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506736 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98zfc\" (UniqueName: \"kubernetes.io/projected/fd36302e-5b75-4a73-ae39-e4a8e58f2682-kube-api-access-98zfc\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506755 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fd36302e-5b75-4a73-ae39-e4a8e58f2682-collector-token\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.506991 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fd36302e-5b75-4a73-ae39-e4a8e58f2682-datadir\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.507061 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-entrypoint\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609561 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fd36302e-5b75-4a73-ae39-e4a8e58f2682-collector-syslog-receiver\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609621 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-trusted-ca\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609665 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fd36302e-5b75-4a73-ae39-e4a8e58f2682-metrics\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609696 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fd36302e-5b75-4a73-ae39-e4a8e58f2682-tmp\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609722 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-config-openshift-service-cacrt\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609742 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-config\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609786 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98zfc\" (UniqueName: \"kubernetes.io/projected/fd36302e-5b75-4a73-ae39-e4a8e58f2682-kube-api-access-98zfc\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609802 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fd36302e-5b75-4a73-ae39-e4a8e58f2682-collector-token\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609845 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fd36302e-5b75-4a73-ae39-e4a8e58f2682-datadir\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609874 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-entrypoint\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.609900 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fd36302e-5b75-4a73-ae39-e4a8e58f2682-sa-token\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.610523 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fd36302e-5b75-4a73-ae39-e4a8e58f2682-datadir\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.611343 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-config\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.612824 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-entrypoint\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " 
pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.613228 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-config-openshift-service-cacrt\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.615371 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd36302e-5b75-4a73-ae39-e4a8e58f2682-trusted-ca\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.618181 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fd36302e-5b75-4a73-ae39-e4a8e58f2682-collector-syslog-receiver\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.618349 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fd36302e-5b75-4a73-ae39-e4a8e58f2682-collector-token\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.618628 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fd36302e-5b75-4a73-ae39-e4a8e58f2682-metrics\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.626788 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fd36302e-5b75-4a73-ae39-e4a8e58f2682-tmp\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.629422 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fd36302e-5b75-4a73-ae39-e4a8e58f2682-sa-token\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.633045 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98zfc\" (UniqueName: \"kubernetes.io/projected/fd36302e-5b75-4a73-ae39-e4a8e58f2682-kube-api-access-98zfc\") pod \"collector-vt6tk\" (UID: \"fd36302e-5b75-4a73-ae39-e4a8e58f2682\") " pod="openshift-logging/collector-vt6tk" Feb 24 15:05:14 crc kubenswrapper[4982]: I0224 15:05:14.738059 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-vt6tk" Feb 24 15:05:15 crc kubenswrapper[4982]: I0224 15:05:15.157864 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec9265d-28f8-4f48-b1e7-5746dcf83e02" path="/var/lib/kubelet/pods/8ec9265d-28f8-4f48-b1e7-5746dcf83e02/volumes" Feb 24 15:05:15 crc kubenswrapper[4982]: I0224 15:05:15.158547 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-vt6tk"] Feb 24 15:05:15 crc kubenswrapper[4982]: I0224 15:05:15.326434 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-vt6tk" event={"ID":"fd36302e-5b75-4a73-ae39-e4a8e58f2682","Type":"ContainerStarted","Data":"2a95a556aecc079edfce1b5b64fddb602806458971b4ea68cbe3424d77a72fa8"} Feb 24 15:05:15 crc kubenswrapper[4982]: I0224 15:05:15.329168 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvq8" event={"ID":"fef86343-cadb-4d05-9729-30cf6eb9ed7d","Type":"ContainerStarted","Data":"175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c"} Feb 24 15:05:15 crc kubenswrapper[4982]: I0224 15:05:15.350243 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rsvq8" podStartSLOduration=2.823504318 podStartE2EDuration="5.350223003s" podCreationTimestamp="2026-02-24 15:05:10 +0000 UTC" firstStartedPulling="2026-02-24 15:05:12.298445098 +0000 UTC m=+973.917503591" lastFinishedPulling="2026-02-24 15:05:14.825163783 +0000 UTC m=+976.444222276" observedRunningTime="2026-02-24 15:05:15.350034788 +0000 UTC m=+976.969093301" watchObservedRunningTime="2026-02-24 15:05:15.350223003 +0000 UTC m=+976.969281496" Feb 24 15:05:21 crc kubenswrapper[4982]: I0224 15:05:21.343978 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:21 crc kubenswrapper[4982]: I0224 15:05:21.344787 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:21 crc kubenswrapper[4982]: I0224 15:05:21.415994 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:22 crc kubenswrapper[4982]: I0224 15:05:22.398104 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-vt6tk" event={"ID":"fd36302e-5b75-4a73-ae39-e4a8e58f2682","Type":"ContainerStarted","Data":"020396ea58960bac8ac0708502c5ad5c60f0cf21dcea68a93bf99b368a1c9b84"} Feb 24 15:05:22 crc kubenswrapper[4982]: I0224 15:05:22.434953 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-vt6tk" podStartSLOduration=1.866111332 podStartE2EDuration="8.434937817s" podCreationTimestamp="2026-02-24 15:05:14 +0000 UTC" firstStartedPulling="2026-02-24 15:05:15.155584997 +0000 UTC m=+976.774643490" lastFinishedPulling="2026-02-24 15:05:21.724411482 +0000 UTC m=+983.343469975" observedRunningTime="2026-02-24 15:05:22.432911892 +0000 UTC m=+984.051970385" watchObservedRunningTime="2026-02-24 15:05:22.434937817 +0000 UTC m=+984.053996310" Feb 24 15:05:22 crc kubenswrapper[4982]: I0224 15:05:22.483701 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:22 crc kubenswrapper[4982]: I0224 15:05:22.655581 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-rsvq8"] Feb 24 15:05:24 crc kubenswrapper[4982]: I0224 15:05:24.418681 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rsvq8" podUID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerName="registry-server" containerID="cri-o://175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c" gracePeriod=2 Feb 24 15:05:24 crc kubenswrapper[4982]: I0224 15:05:24.873268 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.009158 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-catalog-content\") pod \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.009253 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ptj7\" (UniqueName: \"kubernetes.io/projected/fef86343-cadb-4d05-9729-30cf6eb9ed7d-kube-api-access-5ptj7\") pod \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.009444 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-utilities\") pod \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\" (UID: \"fef86343-cadb-4d05-9729-30cf6eb9ed7d\") " Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.010178 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-utilities" (OuterVolumeSpecName: "utilities") pod "fef86343-cadb-4d05-9729-30cf6eb9ed7d" (UID: "fef86343-cadb-4d05-9729-30cf6eb9ed7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.023692 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef86343-cadb-4d05-9729-30cf6eb9ed7d-kube-api-access-5ptj7" (OuterVolumeSpecName: "kube-api-access-5ptj7") pod "fef86343-cadb-4d05-9729-30cf6eb9ed7d" (UID: "fef86343-cadb-4d05-9729-30cf6eb9ed7d"). InnerVolumeSpecName "kube-api-access-5ptj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.085703 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fef86343-cadb-4d05-9729-30cf6eb9ed7d" (UID: "fef86343-cadb-4d05-9729-30cf6eb9ed7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.111446 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.111492 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef86343-cadb-4d05-9729-30cf6eb9ed7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.111525 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ptj7\" (UniqueName: \"kubernetes.io/projected/fef86343-cadb-4d05-9729-30cf6eb9ed7d-kube-api-access-5ptj7\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.428765 4982 generic.go:334] "Generic (PLEG): container finished" podID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerID="175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c" exitCode=0 Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.428813 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvq8" event={"ID":"fef86343-cadb-4d05-9729-30cf6eb9ed7d","Type":"ContainerDied","Data":"175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c"} Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.428825 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rsvq8" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.428848 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvq8" event={"ID":"fef86343-cadb-4d05-9729-30cf6eb9ed7d","Type":"ContainerDied","Data":"8d2ab6f026203a076e9588802df48c0bd1d9161f12b7160efeb2c854105506b2"} Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.428868 4982 scope.go:117] "RemoveContainer" containerID="175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.449747 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rsvq8"] Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.457051 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rsvq8"] Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.470554 4982 scope.go:117] "RemoveContainer" containerID="5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.492109 4982 scope.go:117] "RemoveContainer" containerID="56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.522064 4982 scope.go:117] "RemoveContainer" containerID="175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c" Feb 24 15:05:25 crc kubenswrapper[4982]: E0224 15:05:25.522573 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c\": container with ID starting with 175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c not found: ID does not exist" containerID="175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.522613 
4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c"} err="failed to get container status \"175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c\": rpc error: code = NotFound desc = could not find container \"175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c\": container with ID starting with 175d849bc362560e90a0ec38509f5f15a4daa2e13d81b313bb5c2a1b31ed6f0c not found: ID does not exist" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.522639 4982 scope.go:117] "RemoveContainer" containerID="5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99" Feb 24 15:05:25 crc kubenswrapper[4982]: E0224 15:05:25.522882 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99\": container with ID starting with 5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99 not found: ID does not exist" containerID="5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.522907 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99"} err="failed to get container status \"5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99\": rpc error: code = NotFound desc = could not find container \"5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99\": container with ID starting with 5c0981ef09e04a98acd46939a096bf6446107916f8c0a2260c076c7722aabb99 not found: ID does not exist" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.522925 4982 scope.go:117] "RemoveContainer" containerID="56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656" Feb 24 15:05:25 crc kubenswrapper[4982]: E0224 15:05:25.523243 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656\": container with ID starting with 56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656 not found: ID does not exist" containerID="56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656" Feb 24 15:05:25 crc kubenswrapper[4982]: I0224 15:05:25.523268 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656"} err="failed to get container status \"56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656\": rpc error: code = NotFound desc = could not find container \"56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656\": container with ID starting with 56dc0a3fdd360dacafde1598de28a0937b1f7e70af47814cd8dcd40482c19656 not found: ID does not exist" Feb 24 15:05:27 crc kubenswrapper[4982]: I0224 15:05:27.159591 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" path="/var/lib/kubelet/pods/fef86343-cadb-4d05-9729-30cf6eb9ed7d/volumes" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.324351 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm"] Feb 24 15:05:52 crc kubenswrapper[4982]: E0224 15:05:52.325073 4982 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerName="extract-utilities" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.325085 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerName="extract-utilities" Feb 24 15:05:52 crc kubenswrapper[4982]: E0224 15:05:52.325104 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerName="registry-server" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.325110 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerName="registry-server" Feb 24 15:05:52 crc kubenswrapper[4982]: E0224 15:05:52.325121 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerName="extract-content" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.325127 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerName="extract-content" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.325291 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef86343-cadb-4d05-9729-30cf6eb9ed7d" containerName="registry-server" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.326422 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.328477 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.331838 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm"] Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.502752 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.502962 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.503060 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkwlj\" (UniqueName: \"kubernetes.io/projected/44ef5dc7-0127-4741-bbbb-afa5033ede1a-kube-api-access-vkwlj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.604754 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.605131 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.605266 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkwlj\" (UniqueName: \"kubernetes.io/projected/44ef5dc7-0127-4741-bbbb-afa5033ede1a-kube-api-access-vkwlj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.605739 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.605769 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.627321 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkwlj\" (UniqueName: \"kubernetes.io/projected/44ef5dc7-0127-4741-bbbb-afa5033ede1a-kube-api-access-vkwlj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:52 crc kubenswrapper[4982]: I0224 15:05:52.657794 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:53 crc kubenswrapper[4982]: I0224 15:05:53.060929 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm"] Feb 24 15:05:53 crc kubenswrapper[4982]: I0224 15:05:53.672264 4982 generic.go:334] "Generic (PLEG): container finished" podID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerID="844ae8fac0fa4c6294a5551278f1bf4c5f43c74f5a7f491034be5eda6758fdfb" exitCode=0 Feb 24 15:05:53 crc kubenswrapper[4982]: I0224 15:05:53.672311 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" event={"ID":"44ef5dc7-0127-4741-bbbb-afa5033ede1a","Type":"ContainerDied","Data":"844ae8fac0fa4c6294a5551278f1bf4c5f43c74f5a7f491034be5eda6758fdfb"} Feb 24 15:05:53 crc kubenswrapper[4982]: I0224 15:05:53.672551 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" event={"ID":"44ef5dc7-0127-4741-bbbb-afa5033ede1a","Type":"ContainerStarted","Data":"ff27262875d9a8fd6067ba9741dcc32e8b1f16a1962475e1ca69881638f8f54a"} Feb 24 15:05:55 crc kubenswrapper[4982]: I0224 15:05:55.690020 4982 generic.go:334] "Generic (PLEG): container finished" podID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerID="d2554168327360a09837f6ab25f6e0a2229968bb576c87816caf82a1eafec8ca" exitCode=0 Feb 24 15:05:55 crc kubenswrapper[4982]: I0224 15:05:55.690087 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" event={"ID":"44ef5dc7-0127-4741-bbbb-afa5033ede1a","Type":"ContainerDied","Data":"d2554168327360a09837f6ab25f6e0a2229968bb576c87816caf82a1eafec8ca"} Feb 24 15:05:56 crc kubenswrapper[4982]: I0224 15:05:56.702492 4982 generic.go:334] "Generic (PLEG): container finished" podID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerID="39d10494b54cf0a940ef0b849a49d22c25e9def89d5ee0ada9d754fc6d3f8d40" exitCode=0 Feb 24 15:05:56 crc kubenswrapper[4982]: I0224 15:05:56.702589 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" event={"ID":"44ef5dc7-0127-4741-bbbb-afa5033ede1a","Type":"ContainerDied","Data":"39d10494b54cf0a940ef0b849a49d22c25e9def89d5ee0ada9d754fc6d3f8d40"} Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.011370 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.088844 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-util\") pod \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.088891 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkwlj\" (UniqueName: \"kubernetes.io/projected/44ef5dc7-0127-4741-bbbb-afa5033ede1a-kube-api-access-vkwlj\") pod \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.088968 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-bundle\") pod \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\" (UID: \"44ef5dc7-0127-4741-bbbb-afa5033ede1a\") " Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.089425 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-bundle" (OuterVolumeSpecName: "bundle") pod "44ef5dc7-0127-4741-bbbb-afa5033ede1a" (UID: "44ef5dc7-0127-4741-bbbb-afa5033ede1a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.093827 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ef5dc7-0127-4741-bbbb-afa5033ede1a-kube-api-access-vkwlj" (OuterVolumeSpecName: "kube-api-access-vkwlj") pod "44ef5dc7-0127-4741-bbbb-afa5033ede1a" (UID: "44ef5dc7-0127-4741-bbbb-afa5033ede1a"). InnerVolumeSpecName "kube-api-access-vkwlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.103429 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-util" (OuterVolumeSpecName: "util") pod "44ef5dc7-0127-4741-bbbb-afa5033ede1a" (UID: "44ef5dc7-0127-4741-bbbb-afa5033ede1a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.190866 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.190908 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44ef5dc7-0127-4741-bbbb-afa5033ede1a-util\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.190928 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkwlj\" (UniqueName: \"kubernetes.io/projected/44ef5dc7-0127-4741-bbbb-afa5033ede1a-kube-api-access-vkwlj\") on node \"crc\" DevicePath \"\"" Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.720196 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" event={"ID":"44ef5dc7-0127-4741-bbbb-afa5033ede1a","Type":"ContainerDied","Data":"ff27262875d9a8fd6067ba9741dcc32e8b1f16a1962475e1ca69881638f8f54a"} Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.720257 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff27262875d9a8fd6067ba9741dcc32e8b1f16a1962475e1ca69881638f8f54a" Feb 24 15:05:58 crc kubenswrapper[4982]: I0224 15:05:58.720275 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.141715 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532426-9jmrp"] Feb 24 15:06:00 crc kubenswrapper[4982]: E0224 15:06:00.142443 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerName="pull" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.142463 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerName="pull" Feb 24 15:06:00 crc kubenswrapper[4982]: E0224 15:06:00.142526 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerName="util" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.142538 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerName="util" Feb 24 15:06:00 crc kubenswrapper[4982]: E0224 15:06:00.142554 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerName="extract" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.142567 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerName="extract" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.142791 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ef5dc7-0127-4741-bbbb-afa5033ede1a" containerName="extract" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.143676 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532426-9jmrp" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.146757 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.147062 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.147535 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.153744 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532426-9jmrp"] Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.221369 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkkqd\" (UniqueName: \"kubernetes.io/projected/c115a13d-4dd0-433b-bb33-5e84dea1d390-kube-api-access-bkkqd\") pod \"auto-csr-approver-29532426-9jmrp\" (UID: \"c115a13d-4dd0-433b-bb33-5e84dea1d390\") " pod="openshift-infra/auto-csr-approver-29532426-9jmrp" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.323340 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkkqd\" (UniqueName: \"kubernetes.io/projected/c115a13d-4dd0-433b-bb33-5e84dea1d390-kube-api-access-bkkqd\") pod \"auto-csr-approver-29532426-9jmrp\" (UID: \"c115a13d-4dd0-433b-bb33-5e84dea1d390\") " pod="openshift-infra/auto-csr-approver-29532426-9jmrp" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.344706 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkkqd\" (UniqueName: \"kubernetes.io/projected/c115a13d-4dd0-433b-bb33-5e84dea1d390-kube-api-access-bkkqd\") pod \"auto-csr-approver-29532426-9jmrp\" (UID: \"c115a13d-4dd0-433b-bb33-5e84dea1d390\") " pod="openshift-infra/auto-csr-approver-29532426-9jmrp" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.461350 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532426-9jmrp" Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.662982 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532426-9jmrp"] Feb 24 15:06:00 crc kubenswrapper[4982]: I0224 15:06:00.746440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532426-9jmrp" event={"ID":"c115a13d-4dd0-433b-bb33-5e84dea1d390","Type":"ContainerStarted","Data":"0605fe7e84191843a02db47e3a3b0e834d68f36e2f4f2b2b6969a9fb804c6e1a"} Feb 24 15:06:02 crc kubenswrapper[4982]: I0224 15:06:02.777741 4982 generic.go:334] "Generic (PLEG): container finished" podID="c115a13d-4dd0-433b-bb33-5e84dea1d390" containerID="c630c0f8ac0051fc683792e77bb9382a58b26ecc50cfd3f715dbd9fdf7dd379e" exitCode=0 Feb 24 15:06:02 crc kubenswrapper[4982]: I0224 15:06:02.778544 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532426-9jmrp" event={"ID":"c115a13d-4dd0-433b-bb33-5e84dea1d390","Type":"ContainerDied","Data":"c630c0f8ac0051fc683792e77bb9382a58b26ecc50cfd3f715dbd9fdf7dd379e"} Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.041809 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lj9wc"] Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.043453 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lj9wc" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.046605 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.046991 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.052339 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zk4sq" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.061808 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lj9wc"] Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.187404 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27n7\" (UniqueName: \"kubernetes.io/projected/9d9e0bf3-ed22-416c-b672-8df43d3014c0-kube-api-access-x27n7\") pod \"nmstate-operator-694c9596b7-lj9wc\" (UID: \"9d9e0bf3-ed22-416c-b672-8df43d3014c0\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lj9wc" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.233490 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532426-9jmrp" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.289543 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkkqd\" (UniqueName: \"kubernetes.io/projected/c115a13d-4dd0-433b-bb33-5e84dea1d390-kube-api-access-bkkqd\") pod \"c115a13d-4dd0-433b-bb33-5e84dea1d390\" (UID: \"c115a13d-4dd0-433b-bb33-5e84dea1d390\") " Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.290057 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x27n7\" (UniqueName: \"kubernetes.io/projected/9d9e0bf3-ed22-416c-b672-8df43d3014c0-kube-api-access-x27n7\") pod \"nmstate-operator-694c9596b7-lj9wc\" (UID: \"9d9e0bf3-ed22-416c-b672-8df43d3014c0\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lj9wc" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.296589 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c115a13d-4dd0-433b-bb33-5e84dea1d390-kube-api-access-bkkqd" (OuterVolumeSpecName: "kube-api-access-bkkqd") pod "c115a13d-4dd0-433b-bb33-5e84dea1d390" (UID: "c115a13d-4dd0-433b-bb33-5e84dea1d390"). InnerVolumeSpecName "kube-api-access-bkkqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.322473 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x27n7\" (UniqueName: \"kubernetes.io/projected/9d9e0bf3-ed22-416c-b672-8df43d3014c0-kube-api-access-x27n7\") pod \"nmstate-operator-694c9596b7-lj9wc\" (UID: \"9d9e0bf3-ed22-416c-b672-8df43d3014c0\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lj9wc" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.364138 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lj9wc" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.391965 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkkqd\" (UniqueName: \"kubernetes.io/projected/c115a13d-4dd0-433b-bb33-5e84dea1d390-kube-api-access-bkkqd\") on node \"crc\" DevicePath \"\"" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.788125 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lj9wc"] Feb 24 15:06:04 crc kubenswrapper[4982]: W0224 15:06:04.791100 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d9e0bf3_ed22_416c_b672_8df43d3014c0.slice/crio-895b0e8ebd79515f764b91cc51d2e43e4a7fe814107fd0d5bc31db20338e1c92 WatchSource:0}: Error finding container 895b0e8ebd79515f764b91cc51d2e43e4a7fe814107fd0d5bc31db20338e1c92: Status 404 returned error can't find the container with id 895b0e8ebd79515f764b91cc51d2e43e4a7fe814107fd0d5bc31db20338e1c92 Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.795707 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532426-9jmrp" event={"ID":"c115a13d-4dd0-433b-bb33-5e84dea1d390","Type":"ContainerDied","Data":"0605fe7e84191843a02db47e3a3b0e834d68f36e2f4f2b2b6969a9fb804c6e1a"} Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.795759 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0605fe7e84191843a02db47e3a3b0e834d68f36e2f4f2b2b6969a9fb804c6e1a" Feb 24 15:06:04 crc kubenswrapper[4982]: I0224 15:06:04.795773 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532426-9jmrp" Feb 24 15:06:05 crc kubenswrapper[4982]: I0224 15:06:05.333938 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532420-xw6gk"] Feb 24 15:06:05 crc kubenswrapper[4982]: I0224 15:06:05.340575 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532420-xw6gk"] Feb 24 15:06:05 crc kubenswrapper[4982]: I0224 15:06:05.811084 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lj9wc" event={"ID":"9d9e0bf3-ed22-416c-b672-8df43d3014c0","Type":"ContainerStarted","Data":"895b0e8ebd79515f764b91cc51d2e43e4a7fe814107fd0d5bc31db20338e1c92"} Feb 24 15:06:07 crc kubenswrapper[4982]: I0224 15:06:07.156543 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d787834f-4c51-4859-baa1-ae0f6e91b1a6" path="/var/lib/kubelet/pods/d787834f-4c51-4859-baa1-ae0f6e91b1a6/volumes" Feb 24 15:06:07 crc kubenswrapper[4982]: I0224 15:06:07.828885 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lj9wc" event={"ID":"9d9e0bf3-ed22-416c-b672-8df43d3014c0","Type":"ContainerStarted","Data":"bba78c9f5249c373d23dcbb50f1d82dc837605e4eab688de545f028621f402e4"} Feb 24 15:06:07 crc kubenswrapper[4982]: I0224 15:06:07.846322 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-lj9wc" podStartSLOduration=1.486855989 podStartE2EDuration="3.846301936s" podCreationTimestamp="2026-02-24 15:06:04 +0000 UTC" firstStartedPulling="2026-02-24 15:06:04.795294863 +0000 UTC m=+1026.414353376" lastFinishedPulling="2026-02-24 15:06:07.15474082 +0000 UTC m=+1028.773799323" 
observedRunningTime="2026-02-24 15:06:07.84569087 +0000 UTC m=+1029.464749363" watchObservedRunningTime="2026-02-24 15:06:07.846301936 +0000 UTC m=+1029.465360429" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.469430 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q"] Feb 24 15:06:13 crc kubenswrapper[4982]: E0224 15:06:13.471789 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c115a13d-4dd0-433b-bb33-5e84dea1d390" containerName="oc" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.471921 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c115a13d-4dd0-433b-bb33-5e84dea1d390" containerName="oc" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.472219 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c115a13d-4dd0-433b-bb33-5e84dea1d390" containerName="oc" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.473545 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.475235 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pfdsc" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.487373 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq"] Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.488813 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.494268 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.498079 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q"] Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.506967 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8k954"] Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.507943 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.516570 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq"] Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.571272 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ef265381-af0c-4642-93cb-075344e3650c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vfvxq\" (UID: \"ef265381-af0c-4642-93cb-075344e3650c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.571350 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbfb\" (UniqueName: \"kubernetes.io/projected/42eaa5bc-682b-40c1-ace7-12acd0a45032-kube-api-access-7zbfb\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.571383 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/42eaa5bc-682b-40c1-ace7-12acd0a45032-dbus-socket\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.571412 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdb9z\" (UniqueName: \"kubernetes.io/projected/ef265381-af0c-4642-93cb-075344e3650c-kube-api-access-mdb9z\") pod \"nmstate-webhook-866bcb46dc-vfvxq\" (UID: \"ef265381-af0c-4642-93cb-075344e3650c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.571440 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgl9f\" (UniqueName: \"kubernetes.io/projected/1e0416f6-ebc9-4a01-a69a-904aab8b4cbb-kube-api-access-qgl9f\") pod \"nmstate-metrics-58c85c668d-7rt8q\" (UID: \"1e0416f6-ebc9-4a01-a69a-904aab8b4cbb\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.571464 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/42eaa5bc-682b-40c1-ace7-12acd0a45032-ovs-socket\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.571483 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/42eaa5bc-682b-40c1-ace7-12acd0a45032-nmstate-lock\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.598475 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm"] Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.605264 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.611980 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.612252 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lbprz" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.612432 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.614618 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm"] Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673434 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/42eaa5bc-682b-40c1-ace7-12acd0a45032-ovs-socket\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/42eaa5bc-682b-40c1-ace7-12acd0a45032-nmstate-lock\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673563 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ef265381-af0c-4642-93cb-075344e3650c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vfvxq\" (UID: \"ef265381-af0c-4642-93cb-075344e3650c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673616 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7nsc\" (UniqueName: \"kubernetes.io/projected/f0fec11a-acd6-4eb3-9019-2ecdd41eccf3-kube-api-access-q7nsc\") pod \"nmstate-console-plugin-5c78fc5d65-kqblm\" (UID: \"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673706 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f0fec11a-acd6-4eb3-9019-2ecdd41eccf3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-kqblm\" (UID: \"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673763 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbfb\" (UniqueName: \"kubernetes.io/projected/42eaa5bc-682b-40c1-ace7-12acd0a45032-kube-api-access-7zbfb\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673807 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/42eaa5bc-682b-40c1-ace7-12acd0a45032-dbus-socket\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " 
pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673856 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdb9z\" (UniqueName: \"kubernetes.io/projected/ef265381-af0c-4642-93cb-075344e3650c-kube-api-access-mdb9z\") pod \"nmstate-webhook-866bcb46dc-vfvxq\" (UID: \"ef265381-af0c-4642-93cb-075344e3650c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673881 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f0fec11a-acd6-4eb3-9019-2ecdd41eccf3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-kqblm\" (UID: \"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.673924 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgl9f\" (UniqueName: \"kubernetes.io/projected/1e0416f6-ebc9-4a01-a69a-904aab8b4cbb-kube-api-access-qgl9f\") pod \"nmstate-metrics-58c85c668d-7rt8q\" (UID: \"1e0416f6-ebc9-4a01-a69a-904aab8b4cbb\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.674224 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/42eaa5bc-682b-40c1-ace7-12acd0a45032-ovs-socket\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.674279 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/42eaa5bc-682b-40c1-ace7-12acd0a45032-nmstate-lock\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.674364 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/42eaa5bc-682b-40c1-ace7-12acd0a45032-dbus-socket\") pod \"nmstate-handler-8k954\" (UID: \"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.693708 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ef265381-af0c-4642-93cb-075344e3650c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vfvxq\" (UID: \"ef265381-af0c-4642-93cb-075344e3650c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.696189 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgl9f\" (UniqueName: \"kubernetes.io/projected/1e0416f6-ebc9-4a01-a69a-904aab8b4cbb-kube-api-access-qgl9f\") pod \"nmstate-metrics-58c85c668d-7rt8q\" (UID: \"1e0416f6-ebc9-4a01-a69a-904aab8b4cbb\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.696885 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbfb\" (UniqueName: \"kubernetes.io/projected/42eaa5bc-682b-40c1-ace7-12acd0a45032-kube-api-access-7zbfb\") pod \"nmstate-handler-8k954\" (UID: 
\"42eaa5bc-682b-40c1-ace7-12acd0a45032\") " pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.702032 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdb9z\" (UniqueName: \"kubernetes.io/projected/ef265381-af0c-4642-93cb-075344e3650c-kube-api-access-mdb9z\") pod \"nmstate-webhook-866bcb46dc-vfvxq\" (UID: \"ef265381-af0c-4642-93cb-075344e3650c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.775580 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7nsc\" (UniqueName: \"kubernetes.io/projected/f0fec11a-acd6-4eb3-9019-2ecdd41eccf3-kube-api-access-q7nsc\") pod \"nmstate-console-plugin-5c78fc5d65-kqblm\" (UID: \"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.775648 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f0fec11a-acd6-4eb3-9019-2ecdd41eccf3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-kqblm\" (UID: \"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.775704 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f0fec11a-acd6-4eb3-9019-2ecdd41eccf3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-kqblm\" (UID: \"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.776886 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f0fec11a-acd6-4eb3-9019-2ecdd41eccf3-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-kqblm\" (UID: \"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.782009 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f0fec11a-acd6-4eb3-9019-2ecdd41eccf3-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-kqblm\" (UID: \"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.791608 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7nsc\" (UniqueName: \"kubernetes.io/projected/f0fec11a-acd6-4eb3-9019-2ecdd41eccf3-kube-api-access-q7nsc\") pod \"nmstate-console-plugin-5c78fc5d65-kqblm\" (UID: \"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.799905 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.815750 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.831997 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.852404 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-544ccdb57f-qqrjk"] Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.853337 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.868255 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-544ccdb57f-qqrjk"] Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.928895 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.978096 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mcdx\" (UniqueName: \"kubernetes.io/projected/a12a438e-edd4-4552-bfc2-0989f944b710-kube-api-access-8mcdx\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.978209 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-serving-cert\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.978281 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-oauth-serving-cert\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.979831 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-console-config\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.979901 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-oauth-config\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.980007 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-service-ca\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:13 crc kubenswrapper[4982]: I0224 15:06:13.980043 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-trusted-ca-bundle\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.081769 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-serving-cert\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.081831 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-oauth-serving-cert\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.081861 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-console-config\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.081898 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-oauth-config\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.081922 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-service-ca\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.081945 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-trusted-ca-bundle\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.082009 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mcdx\" (UniqueName: \"kubernetes.io/projected/a12a438e-edd4-4552-bfc2-0989f944b710-kube-api-access-8mcdx\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.083976 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-console-config\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.084314 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-oauth-serving-cert\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.084345 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-trusted-ca-bundle\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.085127 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-service-ca\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.089253 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-oauth-config\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.089252 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-serving-cert\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.103346 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mcdx\" (UniqueName: \"kubernetes.io/projected/a12a438e-edd4-4552-bfc2-0989f944b710-kube-api-access-8mcdx\") pod \"console-544ccdb57f-qqrjk\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.223655 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.358553 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q"] Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.430479 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm"] Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.441884 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq"] Feb 24 15:06:14 crc kubenswrapper[4982]: W0224 15:06:14.445181 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef265381_af0c_4642_93cb_075344e3650c.slice/crio-a8f3b4e9c4717ed4890e5d5cb7cfccc6fffaa8a86b0ef8107737f011012c8b9d WatchSource:0}: Error finding container a8f3b4e9c4717ed4890e5d5cb7cfccc6fffaa8a86b0ef8107737f011012c8b9d: Status 404 returned error can't find the container with id a8f3b4e9c4717ed4890e5d5cb7cfccc6fffaa8a86b0ef8107737f011012c8b9d Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.695938 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-544ccdb57f-qqrjk"] Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.903348 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q" event={"ID":"1e0416f6-ebc9-4a01-a69a-904aab8b4cbb","Type":"ContainerStarted","Data":"db0a8faef54203d5942b718c912089fb70b51d190e37fab6db70cf24c4d40046"} Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.905064 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544ccdb57f-qqrjk" event={"ID":"a12a438e-edd4-4552-bfc2-0989f944b710","Type":"ContainerStarted","Data":"fcfadd48e7a3a6ae0ee1992816b4229db009aa90d29fc9eaf88332ee13d48464"} Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.905362 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544ccdb57f-qqrjk" event={"ID":"a12a438e-edd4-4552-bfc2-0989f944b710","Type":"ContainerStarted","Data":"1af8e6949a73afc476d7ae11d9b9c636f460de4b153f0c9de12e68ad764338ac"} Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.907924 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8k954" event={"ID":"42eaa5bc-682b-40c1-ace7-12acd0a45032","Type":"ContainerStarted","Data":"8a45f2837dac3e2c560b7b38cc5bbf9caea0f1a0e95c2e6b60655679554903b5"} Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.908131 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" event={"ID":"ef265381-af0c-4642-93cb-075344e3650c","Type":"ContainerStarted","Data":"a8f3b4e9c4717ed4890e5d5cb7cfccc6fffaa8a86b0ef8107737f011012c8b9d"} Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.909200 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" event={"ID":"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3","Type":"ContainerStarted","Data":"84a62de4c097f2e4ac21b0843d09f6b02fd70fa2025f6360baeab186ea642182"} Feb 24 15:06:14 crc kubenswrapper[4982]: I0224 15:06:14.929331 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-544ccdb57f-qqrjk" podStartSLOduration=1.929308474 podStartE2EDuration="1.929308474s" podCreationTimestamp="2026-02-24 15:06:13 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:06:14.923892576 +0000 UTC m=+1036.542951089" watchObservedRunningTime="2026-02-24 15:06:14.929308474 +0000 UTC m=+1036.548367007" Feb 24 15:06:17 crc kubenswrapper[4982]: I0224 15:06:17.933722 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" event={"ID":"f0fec11a-acd6-4eb3-9019-2ecdd41eccf3","Type":"ContainerStarted","Data":"d84de26fff17c81b1b7af43bc6445e69a540ee9864e60bb742b32b5bfc974cfe"} Feb 24 15:06:17 crc kubenswrapper[4982]: I0224 15:06:17.935192 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q" event={"ID":"1e0416f6-ebc9-4a01-a69a-904aab8b4cbb","Type":"ContainerStarted","Data":"42b7c23982f41a23252da8bb1e3719cbaefac82ab98cfd14a74393cdc66e7ecd"} Feb 24 15:06:17 crc kubenswrapper[4982]: I0224 15:06:17.937052 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8k954" event={"ID":"42eaa5bc-682b-40c1-ace7-12acd0a45032","Type":"ContainerStarted","Data":"30d3ba133cd229cdcf3158d4c9add06c0c9b07533c4490d191ba788d61026112"} Feb 24 15:06:17 crc kubenswrapper[4982]: I0224 15:06:17.937172 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:17 crc kubenswrapper[4982]: I0224 15:06:17.938529 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" event={"ID":"ef265381-af0c-4642-93cb-075344e3650c","Type":"ContainerStarted","Data":"c99d023a749960be5fe61df86fb85d19968d6f35c5cdae4e8adf3905268e7544"} Feb 24 15:06:17 crc kubenswrapper[4982]: I0224 15:06:17.938690 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:17 crc kubenswrapper[4982]: I0224 15:06:17.956681 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-kqblm" podStartSLOduration=2.203887742 podStartE2EDuration="4.956664291s" podCreationTimestamp="2026-02-24 15:06:13 +0000 UTC" firstStartedPulling="2026-02-24 15:06:14.437431921 +0000 UTC m=+1036.056490414" lastFinishedPulling="2026-02-24 15:06:17.19020847 +0000 UTC m=+1038.809266963" observedRunningTime="2026-02-24 15:06:17.948045436 +0000 UTC m=+1039.567104009" watchObservedRunningTime="2026-02-24 15:06:17.956664291 +0000 UTC m=+1039.575722784" Feb 24 15:06:17 crc kubenswrapper[4982]: I0224 15:06:17.973737 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" podStartSLOduration=2.211214723 podStartE2EDuration="4.973717417s" podCreationTimestamp="2026-02-24 15:06:13 +0000 UTC" firstStartedPulling="2026-02-24 15:06:14.45422713 +0000 UTC m=+1036.073285623" lastFinishedPulling="2026-02-24 15:06:17.216729824 +0000 UTC m=+1038.835788317" observedRunningTime="2026-02-24 15:06:17.971503387 +0000 UTC m=+1039.590561880" watchObservedRunningTime="2026-02-24 15:06:17.973717417 +0000 UTC m=+1039.592775910" Feb 24 15:06:18 crc kubenswrapper[4982]: I0224 15:06:18.005010 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8k954" podStartSLOduration=1.739383667 podStartE2EDuration="5.004992042s" podCreationTimestamp="2026-02-24 15:06:13 +0000 UTC" 
firstStartedPulling="2026-02-24 15:06:13.923122195 +0000 UTC m=+1035.542180688" lastFinishedPulling="2026-02-24 15:06:17.18873054 +0000 UTC m=+1038.807789063" observedRunningTime="2026-02-24 15:06:17.991412261 +0000 UTC m=+1039.610470754" watchObservedRunningTime="2026-02-24 15:06:18.004992042 +0000 UTC m=+1039.624050535" Feb 24 15:06:20 crc kubenswrapper[4982]: I0224 15:06:20.962992 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q" event={"ID":"1e0416f6-ebc9-4a01-a69a-904aab8b4cbb","Type":"ContainerStarted","Data":"d24fbac744716768bd61d138bc13b4b0d3cc5e847361ad761396d1c42725c3c5"} Feb 24 15:06:20 crc kubenswrapper[4982]: I0224 15:06:20.995934 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7rt8q" podStartSLOduration=2.107545852 podStartE2EDuration="7.995899604s" podCreationTimestamp="2026-02-24 15:06:13 +0000 UTC" firstStartedPulling="2026-02-24 15:06:14.369310241 +0000 UTC m=+1035.988368734" lastFinishedPulling="2026-02-24 15:06:20.257663993 +0000 UTC m=+1041.876722486" observedRunningTime="2026-02-24 15:06:20.981727677 +0000 UTC m=+1042.600786230" watchObservedRunningTime="2026-02-24 15:06:20.995899604 +0000 UTC m=+1042.614958137" Feb 24 15:06:21 crc kubenswrapper[4982]: I0224 15:06:21.668640 4982 scope.go:117] "RemoveContainer" containerID="50b296e745f3aa0984f6b6aec06983dc6f0f77505d64261f5e23c62b014d3b02" Feb 24 15:06:23 crc kubenswrapper[4982]: I0224 15:06:23.863146 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8k954" Feb 24 15:06:24 crc kubenswrapper[4982]: I0224 15:06:24.224292 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:24 crc kubenswrapper[4982]: I0224 15:06:24.224407 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:24 crc kubenswrapper[4982]: I0224 15:06:24.232661 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:24 crc kubenswrapper[4982]: I0224 15:06:24.998991 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:06:25 crc kubenswrapper[4982]: I0224 15:06:25.082245 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-c76489864-rvq9n"] Feb 24 15:06:33 crc kubenswrapper[4982]: I0224 15:06:33.825964 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vfvxq" Feb 24 15:06:38 crc kubenswrapper[4982]: I0224 15:06:38.739713 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:06:38 crc kubenswrapper[4982]: I0224 15:06:38.741205 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 
15:06:50.145214 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-c76489864-rvq9n" podUID="e3acc9b8-3d10-46bd-9121-3c505de588b6" containerName="console" containerID="cri-o://ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846" gracePeriod=15 Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.571048 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c76489864-rvq9n_e3acc9b8-3d10-46bd-9121-3c505de588b6/console/0.log" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.571422 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c76489864-rvq9n" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.757311 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-oauth-serving-cert\") pod \"e3acc9b8-3d10-46bd-9121-3c505de588b6\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.757368 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-service-ca\") pod \"e3acc9b8-3d10-46bd-9121-3c505de588b6\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.757387 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-serving-cert\") pod \"e3acc9b8-3d10-46bd-9121-3c505de588b6\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.757446 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbh8p\" (UniqueName: \"kubernetes.io/projected/e3acc9b8-3d10-46bd-9121-3c505de588b6-kube-api-access-hbh8p\") pod \"e3acc9b8-3d10-46bd-9121-3c505de588b6\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.757519 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-oauth-config\") pod \"e3acc9b8-3d10-46bd-9121-3c505de588b6\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.757564 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-trusted-ca-bundle\") pod \"e3acc9b8-3d10-46bd-9121-3c505de588b6\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.757653 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-config\") pod \"e3acc9b8-3d10-46bd-9121-3c505de588b6\" (UID: \"e3acc9b8-3d10-46bd-9121-3c505de588b6\") " Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.758284 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-service-ca" (OuterVolumeSpecName: "service-ca") pod "e3acc9b8-3d10-46bd-9121-3c505de588b6" (UID: 
"e3acc9b8-3d10-46bd-9121-3c505de588b6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.758298 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-config" (OuterVolumeSpecName: "console-config") pod "e3acc9b8-3d10-46bd-9121-3c505de588b6" (UID: "e3acc9b8-3d10-46bd-9121-3c505de588b6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.758449 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e3acc9b8-3d10-46bd-9121-3c505de588b6" (UID: "e3acc9b8-3d10-46bd-9121-3c505de588b6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.759029 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e3acc9b8-3d10-46bd-9121-3c505de588b6" (UID: "e3acc9b8-3d10-46bd-9121-3c505de588b6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.763701 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3acc9b8-3d10-46bd-9121-3c505de588b6-kube-api-access-hbh8p" (OuterVolumeSpecName: "kube-api-access-hbh8p") pod "e3acc9b8-3d10-46bd-9121-3c505de588b6" (UID: "e3acc9b8-3d10-46bd-9121-3c505de588b6"). InnerVolumeSpecName "kube-api-access-hbh8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.764319 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e3acc9b8-3d10-46bd-9121-3c505de588b6" (UID: "e3acc9b8-3d10-46bd-9121-3c505de588b6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.767386 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e3acc9b8-3d10-46bd-9121-3c505de588b6" (UID: "e3acc9b8-3d10-46bd-9121-3c505de588b6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.859328 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.859365 4982 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.859374 4982 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.859383 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3acc9b8-3d10-46bd-9121-3c505de588b6-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.859392 4982 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.859400 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbh8p\" (UniqueName: \"kubernetes.io/projected/e3acc9b8-3d10-46bd-9121-3c505de588b6-kube-api-access-hbh8p\") on node \"crc\" DevicePath \"\"" Feb 24 15:06:50 crc kubenswrapper[4982]: I0224 15:06:50.859409 4982 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e3acc9b8-3d10-46bd-9121-3c505de588b6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.233464 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c76489864-rvq9n_e3acc9b8-3d10-46bd-9121-3c505de588b6/console/0.log" Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.233591 4982 generic.go:334] "Generic (PLEG): container finished" podID="e3acc9b8-3d10-46bd-9121-3c505de588b6" containerID="ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846" exitCode=2 Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.233633 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c76489864-rvq9n" event={"ID":"e3acc9b8-3d10-46bd-9121-3c505de588b6","Type":"ContainerDied","Data":"ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846"} Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.233680 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c76489864-rvq9n" event={"ID":"e3acc9b8-3d10-46bd-9121-3c505de588b6","Type":"ContainerDied","Data":"739c80414d7ffd56967986d00e6a8d3c953a69c6ab7d67414da0932dff02b499"} Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.233683 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c76489864-rvq9n" Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.233734 4982 scope.go:117] "RemoveContainer" containerID="ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846" Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.261457 4982 scope.go:117] "RemoveContainer" containerID="ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846" Feb 24 15:06:51 crc kubenswrapper[4982]: E0224 15:06:51.262004 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846\": container with ID starting with ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846 not found: ID does not exist" containerID="ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846" Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.262074 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846"} err="failed to get container status \"ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846\": rpc error: code = NotFound desc = could not find container \"ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846\": container with ID starting with ffe8beec9110b2c21f765c50c5d136efe11109203b0404a304b4cebca95fa846 not found: ID does not exist" Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.262604 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-c76489864-rvq9n"] Feb 24 15:06:51 crc kubenswrapper[4982]: I0224 15:06:51.267761 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-c76489864-rvq9n"] Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.155210 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3acc9b8-3d10-46bd-9121-3c505de588b6" path="/var/lib/kubelet/pods/e3acc9b8-3d10-46bd-9121-3c505de588b6/volumes" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.501530 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8"] Feb 24 15:06:53 crc kubenswrapper[4982]: E0224 15:06:53.501854 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3acc9b8-3d10-46bd-9121-3c505de588b6" containerName="console" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.501875 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3acc9b8-3d10-46bd-9121-3c505de588b6" containerName="console" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.502080 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3acc9b8-3d10-46bd-9121-3c505de588b6" containerName="console" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.503240 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.506732 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.522517 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8"] Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.602286 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpn2j\" (UniqueName: \"kubernetes.io/projected/f003884e-c2a7-466f-8d03-b0f7bbd2254d-kube-api-access-fpn2j\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.602415 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.602451 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.705064 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpn2j\" (UniqueName: \"kubernetes.io/projected/f003884e-c2a7-466f-8d03-b0f7bbd2254d-kube-api-access-fpn2j\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.705212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.705245 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.706022 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.706248 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.730756 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpn2j\" (UniqueName: \"kubernetes.io/projected/f003884e-c2a7-466f-8d03-b0f7bbd2254d-kube-api-access-fpn2j\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:53 crc kubenswrapper[4982]: I0224 15:06:53.851697 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:54 crc kubenswrapper[4982]: I0224 15:06:54.288243 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8"] Feb 24 15:06:55 crc kubenswrapper[4982]: I0224 15:06:55.268654 4982 generic.go:334] "Generic (PLEG): container finished" podID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerID="f5640164d202abee4bd5411ccf688497a0e59aafd8dfe223278a464340ab70de" exitCode=0 Feb 24 15:06:55 crc kubenswrapper[4982]: I0224 15:06:55.268759 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" event={"ID":"f003884e-c2a7-466f-8d03-b0f7bbd2254d","Type":"ContainerDied","Data":"f5640164d202abee4bd5411ccf688497a0e59aafd8dfe223278a464340ab70de"} Feb 24 15:06:55 crc kubenswrapper[4982]: I0224 15:06:55.269063 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" event={"ID":"f003884e-c2a7-466f-8d03-b0f7bbd2254d","Type":"ContainerStarted","Data":"71ddc744b8d1e3b91946d75ab206834a4ff114226aeaeb38aa3c410d0ae03883"} Feb 24 15:06:57 crc kubenswrapper[4982]: I0224 15:06:57.290581 4982 generic.go:334] "Generic (PLEG): container finished" podID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerID="fa12a4d082cca298fee714baf67300d02f4ac6bdd0dd9d5dd747fdd20491c0b8" exitCode=0 Feb 24 15:06:57 crc kubenswrapper[4982]: I0224 15:06:57.290694 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" event={"ID":"f003884e-c2a7-466f-8d03-b0f7bbd2254d","Type":"ContainerDied","Data":"fa12a4d082cca298fee714baf67300d02f4ac6bdd0dd9d5dd747fdd20491c0b8"} Feb 24 15:06:58 crc kubenswrapper[4982]: I0224 15:06:58.305713 4982 generic.go:334] "Generic (PLEG): container finished" podID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerID="687f28b7acf63b372e68829e96a4d644418b48862f3aa6ee0f2c0f1930bc9773" exitCode=0 Feb 24 15:06:58 crc kubenswrapper[4982]: I0224 
15:06:58.305823 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" event={"ID":"f003884e-c2a7-466f-8d03-b0f7bbd2254d","Type":"ContainerDied","Data":"687f28b7acf63b372e68829e96a4d644418b48862f3aa6ee0f2c0f1930bc9773"} Feb 24 15:06:59 crc kubenswrapper[4982]: I0224 15:06:59.645738 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:06:59 crc kubenswrapper[4982]: I0224 15:06:59.818938 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-bundle\") pod \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " Feb 24 15:06:59 crc kubenswrapper[4982]: I0224 15:06:59.819410 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpn2j\" (UniqueName: \"kubernetes.io/projected/f003884e-c2a7-466f-8d03-b0f7bbd2254d-kube-api-access-fpn2j\") pod \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " Feb 24 15:06:59 crc kubenswrapper[4982]: I0224 15:06:59.819537 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-util\") pod \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\" (UID: \"f003884e-c2a7-466f-8d03-b0f7bbd2254d\") " Feb 24 15:06:59 crc kubenswrapper[4982]: I0224 15:06:59.820244 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-bundle" (OuterVolumeSpecName: "bundle") pod "f003884e-c2a7-466f-8d03-b0f7bbd2254d" (UID: "f003884e-c2a7-466f-8d03-b0f7bbd2254d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:06:59 crc kubenswrapper[4982]: I0224 15:06:59.832948 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f003884e-c2a7-466f-8d03-b0f7bbd2254d-kube-api-access-fpn2j" (OuterVolumeSpecName: "kube-api-access-fpn2j") pod "f003884e-c2a7-466f-8d03-b0f7bbd2254d" (UID: "f003884e-c2a7-466f-8d03-b0f7bbd2254d"). InnerVolumeSpecName "kube-api-access-fpn2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:06:59 crc kubenswrapper[4982]: I0224 15:06:59.921796 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:06:59 crc kubenswrapper[4982]: I0224 15:06:59.921833 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpn2j\" (UniqueName: \"kubernetes.io/projected/f003884e-c2a7-466f-8d03-b0f7bbd2254d-kube-api-access-fpn2j\") on node \"crc\" DevicePath \"\"" Feb 24 15:07:00 crc kubenswrapper[4982]: I0224 15:07:00.128906 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-util" (OuterVolumeSpecName: "util") pod "f003884e-c2a7-466f-8d03-b0f7bbd2254d" (UID: "f003884e-c2a7-466f-8d03-b0f7bbd2254d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:07:00 crc kubenswrapper[4982]: I0224 15:07:00.228598 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f003884e-c2a7-466f-8d03-b0f7bbd2254d-util\") on node \"crc\" DevicePath \"\"" Feb 24 15:07:00 crc kubenswrapper[4982]: I0224 15:07:00.340041 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" event={"ID":"f003884e-c2a7-466f-8d03-b0f7bbd2254d","Type":"ContainerDied","Data":"71ddc744b8d1e3b91946d75ab206834a4ff114226aeaeb38aa3c410d0ae03883"} Feb 24 15:07:00 crc kubenswrapper[4982]: I0224 15:07:00.340104 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71ddc744b8d1e3b91946d75ab206834a4ff114226aeaeb38aa3c410d0ae03883" Feb 24 15:07:00 crc kubenswrapper[4982]: I0224 15:07:00.340173 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.490338 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8"] Feb 24 15:07:08 crc kubenswrapper[4982]: E0224 15:07:08.491183 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerName="util" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.491198 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerName="util" Feb 24 15:07:08 crc kubenswrapper[4982]: E0224 15:07:08.491217 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerName="extract" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.491224 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerName="extract" Feb 24 15:07:08 crc kubenswrapper[4982]: E0224 15:07:08.491246 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerName="pull" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.491253 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerName="pull" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.491414 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f003884e-c2a7-466f-8d03-b0f7bbd2254d" containerName="extract" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.492028 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.495566 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.495752 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.495970 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-fm74k" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.497420 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.498858 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.563223 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8"] Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.564824 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnmmn\" (UniqueName: \"kubernetes.io/projected/bce5dbab-56c7-4132-aa32-b13ea1d81ada-kube-api-access-pnmmn\") pod \"metallb-operator-controller-manager-686d7d6557-xttc8\" (UID: \"bce5dbab-56c7-4132-aa32-b13ea1d81ada\") " pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.564943 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bce5dbab-56c7-4132-aa32-b13ea1d81ada-webhook-cert\") pod \"metallb-operator-controller-manager-686d7d6557-xttc8\" (UID: \"bce5dbab-56c7-4132-aa32-b13ea1d81ada\") " pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.564983 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bce5dbab-56c7-4132-aa32-b13ea1d81ada-apiservice-cert\") pod \"metallb-operator-controller-manager-686d7d6557-xttc8\" (UID: \"bce5dbab-56c7-4132-aa32-b13ea1d81ada\") " pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.666813 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bce5dbab-56c7-4132-aa32-b13ea1d81ada-webhook-cert\") pod \"metallb-operator-controller-manager-686d7d6557-xttc8\" (UID: \"bce5dbab-56c7-4132-aa32-b13ea1d81ada\") " pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.666889 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bce5dbab-56c7-4132-aa32-b13ea1d81ada-apiservice-cert\") pod \"metallb-operator-controller-manager-686d7d6557-xttc8\" (UID: \"bce5dbab-56c7-4132-aa32-b13ea1d81ada\") " pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.666930 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnmmn\" (UniqueName: \"kubernetes.io/projected/bce5dbab-56c7-4132-aa32-b13ea1d81ada-kube-api-access-pnmmn\") pod \"metallb-operator-controller-manager-686d7d6557-xttc8\" (UID: \"bce5dbab-56c7-4132-aa32-b13ea1d81ada\") " pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.676232 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bce5dbab-56c7-4132-aa32-b13ea1d81ada-apiservice-cert\") pod \"metallb-operator-controller-manager-686d7d6557-xttc8\" (UID: \"bce5dbab-56c7-4132-aa32-b13ea1d81ada\") " pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.676239 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bce5dbab-56c7-4132-aa32-b13ea1d81ada-webhook-cert\") pod \"metallb-operator-controller-manager-686d7d6557-xttc8\" (UID: \"bce5dbab-56c7-4132-aa32-b13ea1d81ada\") " pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.702329 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnmmn\" (UniqueName: \"kubernetes.io/projected/bce5dbab-56c7-4132-aa32-b13ea1d81ada-kube-api-access-pnmmn\") pod \"metallb-operator-controller-manager-686d7d6557-xttc8\" (UID: \"bce5dbab-56c7-4132-aa32-b13ea1d81ada\") " pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.738378 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.738426 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.809527 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.977868 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7474599d7f-769v9"] Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.978913 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.983431 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bnzwp" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.983685 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 24 15:07:08 crc kubenswrapper[4982]: I0224 15:07:08.983859 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.004849 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7474599d7f-769v9"] Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.073656 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dp7k\" (UniqueName: \"kubernetes.io/projected/eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb-kube-api-access-7dp7k\") pod \"metallb-operator-webhook-server-7474599d7f-769v9\" (UID: \"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb\") " pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.074285 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb-webhook-cert\") pod \"metallb-operator-webhook-server-7474599d7f-769v9\" (UID: \"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb\") " pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.074385 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb-apiservice-cert\") pod \"metallb-operator-webhook-server-7474599d7f-769v9\" (UID: \"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb\") " pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.176337 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb-webhook-cert\") pod \"metallb-operator-webhook-server-7474599d7f-769v9\" (UID: \"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb\") " pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.176414 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb-apiservice-cert\") pod \"metallb-operator-webhook-server-7474599d7f-769v9\" (UID: \"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb\") " pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.176469 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dp7k\" (UniqueName: \"kubernetes.io/projected/eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb-kube-api-access-7dp7k\") pod \"metallb-operator-webhook-server-7474599d7f-769v9\" (UID: \"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb\") " pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 
15:07:09.191293 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb-apiservice-cert\") pod \"metallb-operator-webhook-server-7474599d7f-769v9\" (UID: \"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb\") " pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.191371 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb-webhook-cert\") pod \"metallb-operator-webhook-server-7474599d7f-769v9\" (UID: \"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb\") " pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.204846 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dp7k\" (UniqueName: \"kubernetes.io/projected/eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb-kube-api-access-7dp7k\") pod \"metallb-operator-webhook-server-7474599d7f-769v9\" (UID: \"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb\") " pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.331359 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.361646 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8"] Feb 24 15:07:09 crc kubenswrapper[4982]: W0224 15:07:09.362228 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbce5dbab_56c7_4132_aa32_b13ea1d81ada.slice/crio-291c13289ca056a63309c6b442c5180a973885f75e7aa693ba099ede5a5eff7a WatchSource:0}: Error finding container 291c13289ca056a63309c6b442c5180a973885f75e7aa693ba099ede5a5eff7a: Status 404 returned error can't find the container with id 291c13289ca056a63309c6b442c5180a973885f75e7aa693ba099ede5a5eff7a Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.415702 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" event={"ID":"bce5dbab-56c7-4132-aa32-b13ea1d81ada","Type":"ContainerStarted","Data":"291c13289ca056a63309c6b442c5180a973885f75e7aa693ba099ede5a5eff7a"} Feb 24 15:07:09 crc kubenswrapper[4982]: I0224 15:07:09.809472 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7474599d7f-769v9"] Feb 24 15:07:10 crc kubenswrapper[4982]: I0224 15:07:10.423199 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" event={"ID":"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb","Type":"ContainerStarted","Data":"7f9b558b1b761681d483b550229c2a3750236da94a1a12bb48de08936f2086fc"} Feb 24 15:07:15 crc kubenswrapper[4982]: I0224 15:07:15.465488 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" event={"ID":"bce5dbab-56c7-4132-aa32-b13ea1d81ada","Type":"ContainerStarted","Data":"728942c0874c40b4544578d115bd4315150eb89e8ca66356ab429c15f55364b1"} Feb 24 15:07:15 crc kubenswrapper[4982]: I0224 15:07:15.466053 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:15 crc kubenswrapper[4982]: I0224 15:07:15.467421 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" event={"ID":"eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb","Type":"ContainerStarted","Data":"7810dd34457a8b79e4ef366c14bb9bae373f76605e361910a238cc6030bc1858"} Feb 24 15:07:15 crc kubenswrapper[4982]: I0224 15:07:15.467625 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:15 crc kubenswrapper[4982]: I0224 15:07:15.490754 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" podStartSLOduration=2.629656646 podStartE2EDuration="7.490737503s" podCreationTimestamp="2026-02-24 15:07:08 +0000 UTC" firstStartedPulling="2026-02-24 15:07:09.364480864 +0000 UTC m=+1090.983539347" lastFinishedPulling="2026-02-24 15:07:14.225561701 +0000 UTC m=+1095.844620204" observedRunningTime="2026-02-24 15:07:15.488778929 +0000 UTC m=+1097.107837452" watchObservedRunningTime="2026-02-24 15:07:15.490737503 +0000 UTC m=+1097.109795996" Feb 24 15:07:15 crc kubenswrapper[4982]: I0224 15:07:15.525068 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" podStartSLOduration=3.083581003 podStartE2EDuration="7.52505059s" podCreationTimestamp="2026-02-24 15:07:08 +0000 UTC" firstStartedPulling="2026-02-24 15:07:09.807158814 +0000 UTC m=+1091.426217307" lastFinishedPulling="2026-02-24 15:07:14.248628401 +0000 UTC m=+1095.867686894" observedRunningTime="2026-02-24 15:07:15.51623325 +0000 UTC m=+1097.135291753" watchObservedRunningTime="2026-02-24 15:07:15.52505059 +0000 UTC m=+1097.144109073" Feb 24 15:07:29 crc kubenswrapper[4982]: I0224 15:07:29.352556 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7474599d7f-769v9" Feb 24 15:07:38 crc kubenswrapper[4982]: I0224 15:07:38.738200 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:07:38 crc kubenswrapper[4982]: I0224 15:07:38.739142 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:07:38 crc kubenswrapper[4982]: I0224 15:07:38.739276 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:07:38 crc kubenswrapper[4982]: I0224 15:07:38.740691 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa114d36019ce5147a672d1a1ffc47f09215e89baaa3aa8a9c736d89a2586a36"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:07:38 crc kubenswrapper[4982]: 
I0224 15:07:38.740817 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://aa114d36019ce5147a672d1a1ffc47f09215e89baaa3aa8a9c736d89a2586a36" gracePeriod=600 Feb 24 15:07:39 crc kubenswrapper[4982]: I0224 15:07:39.678044 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="aa114d36019ce5147a672d1a1ffc47f09215e89baaa3aa8a9c736d89a2586a36" exitCode=0 Feb 24 15:07:39 crc kubenswrapper[4982]: I0224 15:07:39.678132 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"aa114d36019ce5147a672d1a1ffc47f09215e89baaa3aa8a9c736d89a2586a36"} Feb 24 15:07:39 crc kubenswrapper[4982]: I0224 15:07:39.678629 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"5be665899696d5c8fd21b8f8f600a79f59d38e14863a16150fbf781e7134602b"} Feb 24 15:07:39 crc kubenswrapper[4982]: I0224 15:07:39.678657 4982 scope.go:117] "RemoveContainer" containerID="07b5b5ef08503bdcb3a99da20553629509399385e0be96239d55f7f7f354eb91" Feb 24 15:07:48 crc kubenswrapper[4982]: I0224 15:07:48.813434 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-686d7d6557-xttc8" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.817473 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5rzcg"] Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.821703 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.830109 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.830123 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8"] Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.831331 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.831344 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nnq5m" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.832024 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.834616 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.855233 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8"] Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.904476 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-reloader\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.904787 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-metrics\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.904818 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-frr-sockets\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.904854 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d717783-cc63-4772-82fd-b8865e471134-metrics-certs\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.904921 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd5e975a-6b06-4ba5-a549-63843e3d9f41-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-tszx8\" (UID: \"dd5e975a-6b06-4ba5-a549-63843e3d9f41\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.904985 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qzq\" (UniqueName: \"kubernetes.io/projected/1d717783-cc63-4772-82fd-b8865e471134-kube-api-access-h8qzq\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.905026 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-frr-conf\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.905081 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbcpm\" (UniqueName: \"kubernetes.io/projected/dd5e975a-6b06-4ba5-a549-63843e3d9f41-kube-api-access-kbcpm\") pod \"frr-k8s-webhook-server-78b44bf5bb-tszx8\" (UID: \"dd5e975a-6b06-4ba5-a549-63843e3d9f41\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.905106 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1d717783-cc63-4772-82fd-b8865e471134-frr-startup\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.928039 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-z6xx7"] Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.930304 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-z6xx7" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.935591 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.935609 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.935829 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qwggx" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.935868 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.940741 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-flpt4"] Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.942295 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.944029 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 24 15:07:49 crc kubenswrapper[4982]: I0224 15:07:49.961076 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-flpt4"] Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007583 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-cert\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007670 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbcpm\" (UniqueName: \"kubernetes.io/projected/dd5e975a-6b06-4ba5-a549-63843e3d9f41-kube-api-access-kbcpm\") pod \"frr-k8s-webhook-server-78b44bf5bb-tszx8\" (UID: \"dd5e975a-6b06-4ba5-a549-63843e3d9f41\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007702 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-metrics-certs\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007724 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/1d717783-cc63-4772-82fd-b8865e471134-frr-startup\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007753 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c5421b10-e070-4cdd-a7b1-060d75642b50-metallb-excludel2\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007797 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-reloader\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007825 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-metrics\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007850 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhvb4\" (UniqueName: \"kubernetes.io/projected/c5421b10-e070-4cdd-a7b1-060d75642b50-kube-api-access-hhvb4\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007877 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-frr-sockets\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007906 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007938 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d717783-cc63-4772-82fd-b8865e471134-metrics-certs\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.007976 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd5e975a-6b06-4ba5-a549-63843e3d9f41-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-tszx8\" (UID: \"dd5e975a-6b06-4ba5-a549-63843e3d9f41\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.008013 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d8xv\" (UniqueName: \"kubernetes.io/projected/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-kube-api-access-6d8xv\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " 
pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.008039 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8qzq\" (UniqueName: \"kubernetes.io/projected/1d717783-cc63-4772-82fd-b8865e471134-kube-api-access-h8qzq\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.008066 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-metrics-certs\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.008104 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-frr-conf\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.008456 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-reloader\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.008490 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-metrics\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: E0224 15:07:50.008592 4982 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 24 15:07:50 crc kubenswrapper[4982]: E0224 15:07:50.008645 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd5e975a-6b06-4ba5-a549-63843e3d9f41-cert podName:dd5e975a-6b06-4ba5-a549-63843e3d9f41 nodeName:}" failed. No retries permitted until 2026-02-24 15:07:50.50862565 +0000 UTC m=+1132.127684133 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd5e975a-6b06-4ba5-a549-63843e3d9f41-cert") pod "frr-k8s-webhook-server-78b44bf5bb-tszx8" (UID: "dd5e975a-6b06-4ba5-a549-63843e3d9f41") : secret "frr-k8s-webhook-server-cert" not found Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.009077 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-frr-sockets\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.009374 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1d717783-cc63-4772-82fd-b8865e471134-frr-startup\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.012722 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1d717783-cc63-4772-82fd-b8865e471134-frr-conf\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.015770 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d717783-cc63-4772-82fd-b8865e471134-metrics-certs\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.027843 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8qzq\" (UniqueName: \"kubernetes.io/projected/1d717783-cc63-4772-82fd-b8865e471134-kube-api-access-h8qzq\") pod \"frr-k8s-5rzcg\" (UID: \"1d717783-cc63-4772-82fd-b8865e471134\") " pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.028136 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbcpm\" (UniqueName: \"kubernetes.io/projected/dd5e975a-6b06-4ba5-a549-63843e3d9f41-kube-api-access-kbcpm\") pod \"frr-k8s-webhook-server-78b44bf5bb-tszx8\" (UID: \"dd5e975a-6b06-4ba5-a549-63843e3d9f41\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.110979 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-cert\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.111270 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-metrics-certs\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.111295 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c5421b10-e070-4cdd-a7b1-060d75642b50-metallb-excludel2\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" 
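The two MountVolume.SetUp failures above are retried with a doubling delay: durationBeforeRetry is 500ms here, and the later retries for the same memberlist volume show 1s and then 2s. A minimal Go sketch of that exponential backoff pattern follows; it is an illustration only, not the kubelet's nestedpendingoperations code, and the 2-minute cap is an assumption (this log only shows the first three delays).

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of the doubling retry delay visible in the log above
// (durationBeforeRetry 500ms -> 1s -> 2s). Illustrative only: the
// maxDelay cap is an assumption, not something this log shows.
const (
	initialDelay = 500 * time.Millisecond // first retry delay seen in the log
	maxDelay     = 2 * time.Minute        // assumed cap for the sketch
)

func nextDelay(current time.Duration) time.Duration {
	if current == 0 {
		return initialDelay // first failure: retry after 500ms
	}
	if doubled := current * 2; doubled < maxDelay {
		return doubled // each further failure doubles the wait
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 1; i <= 5; i++ {
		d = nextDelay(d)
		fmt.Printf("retry %d scheduled after %v\n", i, d)
	}
}
```

Running it prints 500ms, 1s, 2s, 4s, 8s: the same progression the reconciler records as durationBeforeRetry while the metallb-memberlist secret is still missing.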
Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.111333 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhvb4\" (UniqueName: \"kubernetes.io/projected/c5421b10-e070-4cdd-a7b1-060d75642b50-kube-api-access-hhvb4\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.111362 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.111422 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d8xv\" (UniqueName: \"kubernetes.io/projected/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-kube-api-access-6d8xv\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.111444 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-metrics-certs\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: E0224 15:07:50.111586 4982 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 24 15:07:50 crc kubenswrapper[4982]: E0224 15:07:50.111637 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-metrics-certs podName:59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad nodeName:}" failed. No retries permitted until 2026-02-24 15:07:50.611619442 +0000 UTC m=+1132.230677935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-metrics-certs") pod "controller-69bbfbf88f-flpt4" (UID: "59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad") : secret "controller-certs-secret" not found Feb 24 15:07:50 crc kubenswrapper[4982]: E0224 15:07:50.111831 4982 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 24 15:07:50 crc kubenswrapper[4982]: E0224 15:07:50.111892 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist podName:c5421b10-e070-4cdd-a7b1-060d75642b50 nodeName:}" failed. No retries permitted until 2026-02-24 15:07:50.61187715 +0000 UTC m=+1132.230935643 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist") pod "speaker-z6xx7" (UID: "c5421b10-e070-4cdd-a7b1-060d75642b50") : secret "metallb-memberlist" not found Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.112192 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c5421b10-e070-4cdd-a7b1-060d75642b50-metallb-excludel2\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.116120 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-metrics-certs\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.124303 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-cert\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.129762 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d8xv\" (UniqueName: \"kubernetes.io/projected/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-kube-api-access-6d8xv\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.134346 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhvb4\" (UniqueName: \"kubernetes.io/projected/c5421b10-e070-4cdd-a7b1-060d75642b50-kube-api-access-hhvb4\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.145050 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.520031 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd5e975a-6b06-4ba5-a549-63843e3d9f41-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-tszx8\" (UID: \"dd5e975a-6b06-4ba5-a549-63843e3d9f41\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.526856 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd5e975a-6b06-4ba5-a549-63843e3d9f41-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-tszx8\" (UID: \"dd5e975a-6b06-4ba5-a549-63843e3d9f41\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.621779 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.621865 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-metrics-certs\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: E0224 15:07:50.622017 4982 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 24 15:07:50 crc kubenswrapper[4982]: E0224 15:07:50.622108 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist podName:c5421b10-e070-4cdd-a7b1-060d75642b50 nodeName:}" failed. No retries permitted until 2026-02-24 15:07:51.622086003 +0000 UTC m=+1133.241144506 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist") pod "speaker-z6xx7" (UID: "c5421b10-e070-4cdd-a7b1-060d75642b50") : secret "metallb-memberlist" not found Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.631992 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad-metrics-certs\") pod \"controller-69bbfbf88f-flpt4\" (UID: \"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad\") " pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.749117 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.786997 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerStarted","Data":"26375d228bcc91c39e68208fce5135988a8abc352cf9d85ee3fb1c8fde48e79b"} Feb 24 15:07:50 crc kubenswrapper[4982]: I0224 15:07:50.863733 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:51 crc kubenswrapper[4982]: I0224 15:07:51.284538 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8"] Feb 24 15:07:51 crc kubenswrapper[4982]: I0224 15:07:51.373451 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-flpt4"] Feb 24 15:07:51 crc kubenswrapper[4982]: W0224 15:07:51.380094 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59c1e10b_2985_4ab4_b3d5_6a1ef1ba85ad.slice/crio-5fc86ce7ede655c69a96ac168149d9a4924ac1932ba4e2b9837393251336e33a WatchSource:0}: Error finding container 5fc86ce7ede655c69a96ac168149d9a4924ac1932ba4e2b9837393251336e33a: Status 404 returned error can't find the container with id 5fc86ce7ede655c69a96ac168149d9a4924ac1932ba4e2b9837393251336e33a Feb 24 15:07:51 crc kubenswrapper[4982]: I0224 15:07:51.637888 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:51 crc kubenswrapper[4982]: E0224 15:07:51.638140 4982 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 24 15:07:51 crc kubenswrapper[4982]: E0224 15:07:51.638237 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist podName:c5421b10-e070-4cdd-a7b1-060d75642b50 nodeName:}" failed. No retries permitted until 2026-02-24 15:07:53.638213874 +0000 UTC m=+1135.257272377 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist") pod "speaker-z6xx7" (UID: "c5421b10-e070-4cdd-a7b1-060d75642b50") : secret "metallb-memberlist" not found Feb 24 15:07:51 crc kubenswrapper[4982]: I0224 15:07:51.800365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" event={"ID":"dd5e975a-6b06-4ba5-a549-63843e3d9f41","Type":"ContainerStarted","Data":"94faced32c87b2d1f7d4efa12b8b71a26d77839989ff37d34403e14ae45737af"} Feb 24 15:07:51 crc kubenswrapper[4982]: I0224 15:07:51.805918 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-flpt4" event={"ID":"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad","Type":"ContainerStarted","Data":"ef8004c4a6e2e447a05d1bfcefe0e9503099007b645940f60a69bbc3ff09ecc7"} Feb 24 15:07:51 crc kubenswrapper[4982]: I0224 15:07:51.805962 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-flpt4" event={"ID":"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad","Type":"ContainerStarted","Data":"5fc86ce7ede655c69a96ac168149d9a4924ac1932ba4e2b9837393251336e33a"} Feb 24 15:07:52 crc kubenswrapper[4982]: I0224 15:07:52.840581 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-flpt4" event={"ID":"59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad","Type":"ContainerStarted","Data":"4dc1923d73178710086bb75427c87b194f1218285c3752a4f4e93caa793dab54"} Feb 24 15:07:52 crc kubenswrapper[4982]: I0224 15:07:52.841170 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:07:52 crc kubenswrapper[4982]: I0224 15:07:52.879221 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-flpt4" podStartSLOduration=3.879200595 podStartE2EDuration="3.879200595s" podCreationTimestamp="2026-02-24 15:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:07:52.87169511 +0000 UTC m=+1134.490753603" watchObservedRunningTime="2026-02-24 15:07:52.879200595 +0000 UTC m=+1134.498259078" Feb 24 15:07:53 crc kubenswrapper[4982]: I0224 15:07:53.682463 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:53 crc kubenswrapper[4982]: I0224 15:07:53.698760 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c5421b10-e070-4cdd-a7b1-060d75642b50-memberlist\") pod \"speaker-z6xx7\" (UID: \"c5421b10-e070-4cdd-a7b1-060d75642b50\") " pod="metallb-system/speaker-z6xx7" Feb 24 15:07:53 crc kubenswrapper[4982]: I0224 15:07:53.855057 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-z6xx7" Feb 24 15:07:54 crc kubenswrapper[4982]: I0224 15:07:54.858812 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z6xx7" event={"ID":"c5421b10-e070-4cdd-a7b1-060d75642b50","Type":"ContainerStarted","Data":"86ddd9192e490ab7f448bd934a67f2c3978aff4ed07bd7b7faf19e0e775ac05b"} Feb 24 15:07:54 crc kubenswrapper[4982]: I0224 15:07:54.859208 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z6xx7" event={"ID":"c5421b10-e070-4cdd-a7b1-060d75642b50","Type":"ContainerStarted","Data":"acc9430eef3cf49d7c3ed9d0cc4c5d4a86505cd0f6eaee006545c41a7df428e7"} Feb 24 15:07:54 crc kubenswrapper[4982]: I0224 15:07:54.859220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-z6xx7" event={"ID":"c5421b10-e070-4cdd-a7b1-060d75642b50","Type":"ContainerStarted","Data":"fa9ba997228fe9c29a8358740b777f14dd95dbec3298ead897154a1a11a1901c"} Feb 24 15:07:54 crc kubenswrapper[4982]: I0224 15:07:54.859356 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-z6xx7" Feb 24 15:07:54 crc kubenswrapper[4982]: I0224 15:07:54.878858 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-z6xx7" podStartSLOduration=5.878839545 podStartE2EDuration="5.878839545s" podCreationTimestamp="2026-02-24 15:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:07:54.874821555 +0000 UTC m=+1136.493880048" watchObservedRunningTime="2026-02-24 15:07:54.878839545 +0000 UTC m=+1136.497898038" Feb 24 15:07:58 crc kubenswrapper[4982]: I0224 15:07:58.900322 4982 generic.go:334] "Generic (PLEG): container finished" podID="1d717783-cc63-4772-82fd-b8865e471134" containerID="b58622d371c1713f9e745e94a3ff63d662384eb17a390ec3ea375982cad51c4c" exitCode=0 Feb 24 15:07:58 crc kubenswrapper[4982]: I0224 15:07:58.900373 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerDied","Data":"b58622d371c1713f9e745e94a3ff63d662384eb17a390ec3ea375982cad51c4c"} Feb 24 15:07:58 crc kubenswrapper[4982]: I0224 15:07:58.903648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" event={"ID":"dd5e975a-6b06-4ba5-a549-63843e3d9f41","Type":"ContainerStarted","Data":"f487081fac450c1881f1b4c1d12a7f56710f0fa43d6c5392857bc5b781937c14"} Feb 24 15:07:58 crc kubenswrapper[4982]: I0224 15:07:58.903825 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:07:58 crc kubenswrapper[4982]: I0224 15:07:58.974841 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" podStartSLOduration=2.858265933 podStartE2EDuration="9.974821478s" podCreationTimestamp="2026-02-24 15:07:49 +0000 UTC" firstStartedPulling="2026-02-24 15:07:51.299720039 +0000 UTC m=+1132.918778532" lastFinishedPulling="2026-02-24 15:07:58.416275574 +0000 UTC m=+1140.035334077" observedRunningTime="2026-02-24 15:07:58.973069449 +0000 UTC m=+1140.592127982" watchObservedRunningTime="2026-02-24 15:07:58.974821478 +0000 UTC m=+1140.593879991" Feb 24 15:07:59 crc kubenswrapper[4982]: I0224 15:07:59.917812 4982 generic.go:334] "Generic (PLEG): container finished" 
podID="1d717783-cc63-4772-82fd-b8865e471134" containerID="22721be0567a48a0a7d43afdf563f55452dd420edb192d53a63ea1c983eaddd7" exitCode=0 Feb 24 15:07:59 crc kubenswrapper[4982]: I0224 15:07:59.917989 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerDied","Data":"22721be0567a48a0a7d43afdf563f55452dd420edb192d53a63ea1c983eaddd7"} Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.139375 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532428-wdr4v"] Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.143881 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532428-wdr4v" Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.147318 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.147704 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.148630 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.151726 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532428-wdr4v"] Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.296342 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xbd6\" (UniqueName: \"kubernetes.io/projected/370d1fb7-83d9-4153-ac92-24d499c15b76-kube-api-access-9xbd6\") pod \"auto-csr-approver-29532428-wdr4v\" (UID: \"370d1fb7-83d9-4153-ac92-24d499c15b76\") " pod="openshift-infra/auto-csr-approver-29532428-wdr4v" Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.398309 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xbd6\" (UniqueName: \"kubernetes.io/projected/370d1fb7-83d9-4153-ac92-24d499c15b76-kube-api-access-9xbd6\") pod \"auto-csr-approver-29532428-wdr4v\" (UID: \"370d1fb7-83d9-4153-ac92-24d499c15b76\") " pod="openshift-infra/auto-csr-approver-29532428-wdr4v" Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.417929 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xbd6\" (UniqueName: \"kubernetes.io/projected/370d1fb7-83d9-4153-ac92-24d499c15b76-kube-api-access-9xbd6\") pod \"auto-csr-approver-29532428-wdr4v\" (UID: \"370d1fb7-83d9-4153-ac92-24d499c15b76\") " pod="openshift-infra/auto-csr-approver-29532428-wdr4v" Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.472627 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532428-wdr4v" Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.912395 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532428-wdr4v"] Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.932940 4982 generic.go:334] "Generic (PLEG): container finished" podID="1d717783-cc63-4772-82fd-b8865e471134" containerID="46aaf7df9b03e47a27ea34c9db9a8fa40b11f0b9b2fe46f45e97dc52acfaf64a" exitCode=0 Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.933067 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerDied","Data":"46aaf7df9b03e47a27ea34c9db9a8fa40b11f0b9b2fe46f45e97dc52acfaf64a"} Feb 24 15:08:00 crc kubenswrapper[4982]: I0224 15:08:00.936930 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532428-wdr4v" event={"ID":"370d1fb7-83d9-4153-ac92-24d499c15b76","Type":"ContainerStarted","Data":"d4ee8ac4b6193fb81da95d5684abdb8bac1a02d4ff5550f4ffcd4b0a0f174cfd"} Feb 24 15:08:01 crc kubenswrapper[4982]: I0224 15:08:01.954638 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerStarted","Data":"0db066f057b0d3ebc134b27de582079fbc5b10213c0601052720a8920e8c2cea"} Feb 24 15:08:01 crc kubenswrapper[4982]: I0224 15:08:01.955115 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerStarted","Data":"7aa8c85c0dcc7f4f49e8dc6b40d1c5870954741b7398a3183bec12cd082dbc5e"} Feb 24 15:08:01 crc kubenswrapper[4982]: I0224 15:08:01.955127 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerStarted","Data":"d4a4d8876335da048c93cc28871ec6fb5c2e507f156b55df3fb0f8cb9578bf15"} Feb 24 15:08:01 crc kubenswrapper[4982]: I0224 15:08:01.955137 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerStarted","Data":"959cbb633c8f435fa4558e709c39c29cd7d601f62622f8dd6e768d2572f19b45"} Feb 24 15:08:01 crc kubenswrapper[4982]: I0224 15:08:01.955147 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerStarted","Data":"229a0f2e33eee15118d6d55f7a1f1200e9d0fc7cf27ca74b3af448204ed26f48"} Feb 24 15:08:02 crc kubenswrapper[4982]: I0224 15:08:02.965352 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532428-wdr4v" event={"ID":"370d1fb7-83d9-4153-ac92-24d499c15b76","Type":"ContainerStarted","Data":"4bf39451b8e665d55680a2330bebe2181801a7087f1de256d131d30863c01f6a"} Feb 24 15:08:02 crc kubenswrapper[4982]: I0224 15:08:02.971180 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5rzcg" event={"ID":"1d717783-cc63-4772-82fd-b8865e471134","Type":"ContainerStarted","Data":"4c97a46cbc3ce5283d82350191a98b6800068df0da96687d2074806adf6e2bfe"} Feb 24 15:08:02 crc kubenswrapper[4982]: I0224 15:08:02.971562 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:08:02 crc kubenswrapper[4982]: I0224 15:08:02.989695 4982 
Feb 24 15:08:03 crc kubenswrapper[4982]: I0224 15:08:03.016782 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5rzcg" podStartSLOduration=5.921745705 podStartE2EDuration="14.016751152s" podCreationTimestamp="2026-02-24 15:07:49 +0000 UTC" firstStartedPulling="2026-02-24 15:07:50.303552493 +0000 UTC m=+1131.922610986" lastFinishedPulling="2026-02-24 15:07:58.39855794 +0000 UTC m=+1140.017616433" observedRunningTime="2026-02-24 15:08:03.006559894 +0000 UTC m=+1144.625618407" watchObservedRunningTime="2026-02-24 15:08:03.016751152 +0000 UTC m=+1144.635809655"
Feb 24 15:08:03 crc kubenswrapper[4982]: I0224 15:08:03.984423 4982 generic.go:334] "Generic (PLEG): container finished" podID="370d1fb7-83d9-4153-ac92-24d499c15b76" containerID="4bf39451b8e665d55680a2330bebe2181801a7087f1de256d131d30863c01f6a" exitCode=0
Feb 24 15:08:03 crc kubenswrapper[4982]: I0224 15:08:03.984582 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532428-wdr4v" event={"ID":"370d1fb7-83d9-4153-ac92-24d499c15b76","Type":"ContainerDied","Data":"4bf39451b8e665d55680a2330bebe2181801a7087f1de256d131d30863c01f6a"}
Feb 24 15:08:05 crc kubenswrapper[4982]: I0224 15:08:05.159848 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5rzcg"
Feb 24 15:08:05 crc kubenswrapper[4982]: I0224 15:08:05.189300 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5rzcg"
Feb 24 15:08:05 crc kubenswrapper[4982]: I0224 15:08:05.396826 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532428-wdr4v"
Feb 24 15:08:05 crc kubenswrapper[4982]: I0224 15:08:05.487756 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xbd6\" (UniqueName: \"kubernetes.io/projected/370d1fb7-83d9-4153-ac92-24d499c15b76-kube-api-access-9xbd6\") pod \"370d1fb7-83d9-4153-ac92-24d499c15b76\" (UID: \"370d1fb7-83d9-4153-ac92-24d499c15b76\") "
Feb 24 15:08:05 crc kubenswrapper[4982]: I0224 15:08:05.493835 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370d1fb7-83d9-4153-ac92-24d499c15b76-kube-api-access-9xbd6" (OuterVolumeSpecName: "kube-api-access-9xbd6") pod "370d1fb7-83d9-4153-ac92-24d499c15b76" (UID: "370d1fb7-83d9-4153-ac92-24d499c15b76"). InnerVolumeSpecName "kube-api-access-9xbd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:08:05 crc kubenswrapper[4982]: I0224 15:08:05.589585 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xbd6\" (UniqueName: \"kubernetes.io/projected/370d1fb7-83d9-4153-ac92-24d499c15b76-kube-api-access-9xbd6\") on node \"crc\" DevicePath \"\""
Feb 24 15:08:06 crc kubenswrapper[4982]: I0224 15:08:06.021408 4982 util.go:48] "No ready sandbox for pod can be found.
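
The "Observed pod startup duration" entries above report two numbers: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m=+ clock). The frr-k8s-5rzcg entry checks out: 14.016751152s minus 8.095005447s of pulling leaves the reported 5.921745705. A small verification of that arithmetic; the relationship is inferred from the entries in this log, not quoted from kubelet source:

```go
package main

import "fmt"

func main() {
	// Values copied from the frr-k8s-5rzcg latency entry above
	// (monotonic m=+ offsets, in seconds).
	const (
		e2e                 = 14.016751152 // podStartE2EDuration
		firstStartedPulling = 1131.922610986
		lastFinishedPulling = 1140.017616433
	)
	pull := lastFinishedPulling - firstStartedPulling
	slo := e2e - pull
	fmt.Printf("pull window = %.9fs, podStartSLOduration = %.9fs\n", pull, slo)
	// => pull window ≈ 8.095005447s, podStartSLOduration ≈ 5.921745705s,
	// matching podStartSLOduration=5.921745705 in the log line above.
}
```

The same relation holds for auto-csr-approver-29532428-wdr4v (2.989664973 − 1.025969769 = 1.963695204), while pods whose images were cached show firstStartedPulling/lastFinishedPulling at the zero time and SLO equal to E2E.
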
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532428-wdr4v" Feb 24 15:08:06 crc kubenswrapper[4982]: I0224 15:08:06.021402 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532428-wdr4v" event={"ID":"370d1fb7-83d9-4153-ac92-24d499c15b76","Type":"ContainerDied","Data":"d4ee8ac4b6193fb81da95d5684abdb8bac1a02d4ff5550f4ffcd4b0a0f174cfd"} Feb 24 15:08:06 crc kubenswrapper[4982]: I0224 15:08:06.021879 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4ee8ac4b6193fb81da95d5684abdb8bac1a02d4ff5550f4ffcd4b0a0f174cfd" Feb 24 15:08:06 crc kubenswrapper[4982]: I0224 15:08:06.031510 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532422-znpkl"] Feb 24 15:08:06 crc kubenswrapper[4982]: I0224 15:08:06.037447 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532422-znpkl"] Feb 24 15:08:07 crc kubenswrapper[4982]: I0224 15:08:07.160311 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18680cf-a054-40f5-af8d-54eec7b94616" path="/var/lib/kubelet/pods/f18680cf-a054-40f5-af8d-54eec7b94616/volumes" Feb 24 15:08:10 crc kubenswrapper[4982]: I0224 15:08:10.754798 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-tszx8" Feb 24 15:08:10 crc kubenswrapper[4982]: I0224 15:08:10.869004 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-flpt4" Feb 24 15:08:13 crc kubenswrapper[4982]: I0224 15:08:13.861151 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-z6xx7" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.080490 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-m7xxk"] Feb 24 15:08:17 crc kubenswrapper[4982]: E0224 15:08:17.081562 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370d1fb7-83d9-4153-ac92-24d499c15b76" containerName="oc" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.081964 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="370d1fb7-83d9-4153-ac92-24d499c15b76" containerName="oc" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.082387 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="370d1fb7-83d9-4153-ac92-24d499c15b76" containerName="oc" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.083781 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m7xxk" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.087798 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-sh9ct" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.088293 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.088536 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.104434 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lw8g\" (UniqueName: \"kubernetes.io/projected/3d32f2d1-8666-48b8-a252-27651272fc97-kube-api-access-5lw8g\") pod \"openstack-operator-index-m7xxk\" (UID: \"3d32f2d1-8666-48b8-a252-27651272fc97\") " pod="openstack-operators/openstack-operator-index-m7xxk" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.133058 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m7xxk"] Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.206269 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lw8g\" (UniqueName: \"kubernetes.io/projected/3d32f2d1-8666-48b8-a252-27651272fc97-kube-api-access-5lw8g\") pod \"openstack-operator-index-m7xxk\" (UID: \"3d32f2d1-8666-48b8-a252-27651272fc97\") " pod="openstack-operators/openstack-operator-index-m7xxk" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.235089 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lw8g\" (UniqueName: \"kubernetes.io/projected/3d32f2d1-8666-48b8-a252-27651272fc97-kube-api-access-5lw8g\") pod \"openstack-operator-index-m7xxk\" (UID: \"3d32f2d1-8666-48b8-a252-27651272fc97\") " pod="openstack-operators/openstack-operator-index-m7xxk" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.419292 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m7xxk" Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.876603 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m7xxk"] Feb 24 15:08:17 crc kubenswrapper[4982]: W0224 15:08:17.881904 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d32f2d1_8666_48b8_a252_27651272fc97.slice/crio-1df649457ba259a8d0c3ede2c5a23822f2b02956aebe6fa6bc146864a1c3f67c WatchSource:0}: Error finding container 1df649457ba259a8d0c3ede2c5a23822f2b02956aebe6fa6bc146864a1c3f67c: Status 404 returned error can't find the container with id 1df649457ba259a8d0c3ede2c5a23822f2b02956aebe6fa6bc146864a1c3f67c Feb 24 15:08:17 crc kubenswrapper[4982]: I0224 15:08:17.884446 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 15:08:18 crc kubenswrapper[4982]: I0224 15:08:18.141520 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m7xxk" event={"ID":"3d32f2d1-8666-48b8-a252-27651272fc97","Type":"ContainerStarted","Data":"1df649457ba259a8d0c3ede2c5a23822f2b02956aebe6fa6bc146864a1c3f67c"} Feb 24 15:08:20 crc kubenswrapper[4982]: I0224 15:08:20.148529 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5rzcg" Feb 24 15:08:20 crc kubenswrapper[4982]: I0224 15:08:20.433536 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-m7xxk"] Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.047533 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-t7nqk"] Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.049063 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-t7nqk" Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.055773 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t7nqk"] Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.075689 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vb8\" (UniqueName: \"kubernetes.io/projected/8c774d84-399c-417b-a926-981419902625-kube-api-access-88vb8\") pod \"openstack-operator-index-t7nqk\" (UID: \"8c774d84-399c-417b-a926-981419902625\") " pod="openstack-operators/openstack-operator-index-t7nqk" Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.167798 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m7xxk" event={"ID":"3d32f2d1-8666-48b8-a252-27651272fc97","Type":"ContainerStarted","Data":"d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42"} Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.167877 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-m7xxk" podUID="3d32f2d1-8666-48b8-a252-27651272fc97" containerName="registry-server" containerID="cri-o://d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42" gracePeriod=2 Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.182863 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vb8\" (UniqueName: \"kubernetes.io/projected/8c774d84-399c-417b-a926-981419902625-kube-api-access-88vb8\") pod \"openstack-operator-index-t7nqk\" (UID: \"8c774d84-399c-417b-a926-981419902625\") " pod="openstack-operators/openstack-operator-index-t7nqk" Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.187561 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-m7xxk" podStartSLOduration=1.109977819 podStartE2EDuration="4.187542378s" podCreationTimestamp="2026-02-24 15:08:17 +0000 UTC" firstStartedPulling="2026-02-24 15:08:17.884169673 +0000 UTC m=+1159.503228166" lastFinishedPulling="2026-02-24 15:08:20.961734232 +0000 UTC m=+1162.580792725" observedRunningTime="2026-02-24 15:08:21.180787854 +0000 UTC m=+1162.799846347" watchObservedRunningTime="2026-02-24 15:08:21.187542378 +0000 UTC m=+1162.806600871" Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.205810 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vb8\" (UniqueName: \"kubernetes.io/projected/8c774d84-399c-417b-a926-981419902625-kube-api-access-88vb8\") pod \"openstack-operator-index-t7nqk\" (UID: \"8c774d84-399c-417b-a926-981419902625\") " pod="openstack-operators/openstack-operator-index-t7nqk" Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.373383 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t7nqk" Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.574383 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m7xxk" Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.691399 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lw8g\" (UniqueName: \"kubernetes.io/projected/3d32f2d1-8666-48b8-a252-27651272fc97-kube-api-access-5lw8g\") pod \"3d32f2d1-8666-48b8-a252-27651272fc97\" (UID: \"3d32f2d1-8666-48b8-a252-27651272fc97\") " Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.696161 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d32f2d1-8666-48b8-a252-27651272fc97-kube-api-access-5lw8g" (OuterVolumeSpecName: "kube-api-access-5lw8g") pod "3d32f2d1-8666-48b8-a252-27651272fc97" (UID: "3d32f2d1-8666-48b8-a252-27651272fc97"). InnerVolumeSpecName "kube-api-access-5lw8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.793162 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lw8g\" (UniqueName: \"kubernetes.io/projected/3d32f2d1-8666-48b8-a252-27651272fc97-kube-api-access-5lw8g\") on node \"crc\" DevicePath \"\"" Feb 24 15:08:21 crc kubenswrapper[4982]: I0224 15:08:21.834073 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t7nqk"] Feb 24 15:08:21 crc kubenswrapper[4982]: W0224 15:08:21.836693 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c774d84_399c_417b_a926_981419902625.slice/crio-0ef5317e22dd46a0b447592c1916ce8664fb8e174cc6ee6a04ee879c5ae433a3 WatchSource:0}: Error finding container 0ef5317e22dd46a0b447592c1916ce8664fb8e174cc6ee6a04ee879c5ae433a3: Status 404 returned error can't find the container with id 0ef5317e22dd46a0b447592c1916ce8664fb8e174cc6ee6a04ee879c5ae433a3 Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.179249 4982 generic.go:334] "Generic (PLEG): container finished" podID="3d32f2d1-8666-48b8-a252-27651272fc97" containerID="d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42" exitCode=0 Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.179306 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m7xxk" event={"ID":"3d32f2d1-8666-48b8-a252-27651272fc97","Type":"ContainerDied","Data":"d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42"} Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.179331 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m7xxk" Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.179643 4982 scope.go:117] "RemoveContainer" containerID="d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42" Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.179626 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m7xxk" event={"ID":"3d32f2d1-8666-48b8-a252-27651272fc97","Type":"ContainerDied","Data":"1df649457ba259a8d0c3ede2c5a23822f2b02956aebe6fa6bc146864a1c3f67c"} Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.181892 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t7nqk" event={"ID":"8c774d84-399c-417b-a926-981419902625","Type":"ContainerStarted","Data":"bf4012d93301f7bd2998ef826018d89e5e39fda19f73b1a0e16d1e506bcf35c5"} Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.181925 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t7nqk" event={"ID":"8c774d84-399c-417b-a926-981419902625","Type":"ContainerStarted","Data":"0ef5317e22dd46a0b447592c1916ce8664fb8e174cc6ee6a04ee879c5ae433a3"} Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.203191 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-t7nqk" podStartSLOduration=1.149198322 podStartE2EDuration="1.203165876s" podCreationTimestamp="2026-02-24 15:08:21 +0000 UTC" firstStartedPulling="2026-02-24 15:08:21.840762778 +0000 UTC m=+1163.459821271" lastFinishedPulling="2026-02-24 15:08:21.894730332 +0000 UTC m=+1163.513788825" observedRunningTime="2026-02-24 15:08:22.200099591 +0000 UTC m=+1163.819158354" watchObservedRunningTime="2026-02-24 15:08:22.203165876 +0000 UTC m=+1163.822224389" Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.205410 4982 scope.go:117] "RemoveContainer" containerID="d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42" Feb 24 15:08:22 crc kubenswrapper[4982]: E0224 15:08:22.208955 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42\": container with ID starting with d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42 not found: ID does not exist" containerID="d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42" Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.209007 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42"} err="failed to get container status \"d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42\": rpc error: code = NotFound desc = could not find container \"d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42\": container with ID starting with d8616d4b6a15747cc469226117fe44b39206b873680b33507951d695b0fcbd42 not found: ID does not exist" Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.225209 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-m7xxk"] Feb 24 15:08:22 crc kubenswrapper[4982]: I0224 15:08:22.232853 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-m7xxk"] Feb 24 15:08:22 crc kubenswrapper[4982]: E0224 15:08:22.359740 4982 
Feb 24 15:08:22 crc kubenswrapper[4982]: E0224 15:08:22.359740 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d32f2d1_8666_48b8_a252_27651272fc97.slice\": RecentStats: unable to find data in memory cache]"
Feb 24 15:08:23 crc kubenswrapper[4982]: I0224 15:08:23.157856 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d32f2d1-8666-48b8-a252-27651272fc97" path="/var/lib/kubelet/pods/3d32f2d1-8666-48b8-a252-27651272fc97/volumes"
Feb 24 15:08:31 crc kubenswrapper[4982]: I0224 15:08:31.374170 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-t7nqk"
Feb 24 15:08:31 crc kubenswrapper[4982]: I0224 15:08:31.374591 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-t7nqk"
Feb 24 15:08:31 crc kubenswrapper[4982]: I0224 15:08:31.421799 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-t7nqk"
Feb 24 15:08:32 crc kubenswrapper[4982]: I0224 15:08:32.330437 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-t7nqk"
Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.081117 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv"]
Feb 24 15:08:40 crc kubenswrapper[4982]: E0224 15:08:40.082052 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d32f2d1-8666-48b8-a252-27651272fc97" containerName="registry-server"
Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.082068 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d32f2d1-8666-48b8-a252-27651272fc97" containerName="registry-server"
Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.082250 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d32f2d1-8666-48b8-a252-27651272fc97" containerName="registry-server"
Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.083812 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv"
Need to start a new one" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.086309 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tmlj9" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.090167 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv"] Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.126456 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-util\") pod \"0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.126519 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-bundle\") pod \"0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.126706 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5p6m\" (UniqueName: \"kubernetes.io/projected/c2092b9c-4328-4fe0-976e-787c1f38d7bb-kube-api-access-p5p6m\") pod \"0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.228336 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5p6m\" (UniqueName: \"kubernetes.io/projected/c2092b9c-4328-4fe0-976e-787c1f38d7bb-kube-api-access-p5p6m\") pod \"0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.228441 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-util\") pod \"0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.228469 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-bundle\") pod \"0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.229439 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-bundle\") pod \"0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.229805 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-util\") pod \"0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.249854 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5p6m\" (UniqueName: \"kubernetes.io/projected/c2092b9c-4328-4fe0-976e-787c1f38d7bb-kube-api-access-p5p6m\") pod \"0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.406887 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:40 crc kubenswrapper[4982]: I0224 15:08:40.914182 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv"] Feb 24 15:08:41 crc kubenswrapper[4982]: E0224 15:08:41.274941 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2092b9c_4328_4fe0_976e_787c1f38d7bb.slice/crio-conmon-0228fee3cfe8f1df853d0e821476bca67456bd125d4816f075f6642995577d3b.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:08:41 crc kubenswrapper[4982]: I0224 15:08:41.369066 4982 generic.go:334] "Generic (PLEG): container finished" podID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerID="0228fee3cfe8f1df853d0e821476bca67456bd125d4816f075f6642995577d3b" exitCode=0 Feb 24 15:08:41 crc kubenswrapper[4982]: I0224 15:08:41.369148 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" event={"ID":"c2092b9c-4328-4fe0-976e-787c1f38d7bb","Type":"ContainerDied","Data":"0228fee3cfe8f1df853d0e821476bca67456bd125d4816f075f6642995577d3b"} Feb 24 15:08:41 crc kubenswrapper[4982]: I0224 15:08:41.369384 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" event={"ID":"c2092b9c-4328-4fe0-976e-787c1f38d7bb","Type":"ContainerStarted","Data":"a9c59a6aae6d1013a2a8cd3e14bcb61d6cbea940541c58aca6e06faf9ec08789"} Feb 24 15:08:42 crc kubenswrapper[4982]: I0224 15:08:42.381168 4982 generic.go:334] "Generic (PLEG): container finished" podID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerID="c9f6a088a00b6c8e4980ec8ff3a3511215efab631c726d69aba2acfb79fbe06e" exitCode=0 Feb 24 15:08:42 crc kubenswrapper[4982]: I0224 15:08:42.381254 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" 
event={"ID":"c2092b9c-4328-4fe0-976e-787c1f38d7bb","Type":"ContainerDied","Data":"c9f6a088a00b6c8e4980ec8ff3a3511215efab631c726d69aba2acfb79fbe06e"} Feb 24 15:08:43 crc kubenswrapper[4982]: I0224 15:08:43.396654 4982 generic.go:334] "Generic (PLEG): container finished" podID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerID="ed304ab5afc69e03ef29a3b43496240721848bf70cd699722ebac981121ebf18" exitCode=0 Feb 24 15:08:43 crc kubenswrapper[4982]: I0224 15:08:43.396781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" event={"ID":"c2092b9c-4328-4fe0-976e-787c1f38d7bb","Type":"ContainerDied","Data":"ed304ab5afc69e03ef29a3b43496240721848bf70cd699722ebac981121ebf18"} Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.759316 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.867277 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-bundle\") pod \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.867360 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-util\") pod \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.867399 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5p6m\" (UniqueName: \"kubernetes.io/projected/c2092b9c-4328-4fe0-976e-787c1f38d7bb-kube-api-access-p5p6m\") pod \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\" (UID: \"c2092b9c-4328-4fe0-976e-787c1f38d7bb\") " Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.868681 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-bundle" (OuterVolumeSpecName: "bundle") pod "c2092b9c-4328-4fe0-976e-787c1f38d7bb" (UID: "c2092b9c-4328-4fe0-976e-787c1f38d7bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.873092 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2092b9c-4328-4fe0-976e-787c1f38d7bb-kube-api-access-p5p6m" (OuterVolumeSpecName: "kube-api-access-p5p6m") pod "c2092b9c-4328-4fe0-976e-787c1f38d7bb" (UID: "c2092b9c-4328-4fe0-976e-787c1f38d7bb"). InnerVolumeSpecName "kube-api-access-p5p6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.880667 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-util" (OuterVolumeSpecName: "util") pod "c2092b9c-4328-4fe0-976e-787c1f38d7bb" (UID: "c2092b9c-4328-4fe0-976e-787c1f38d7bb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.968653 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.968698 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2092b9c-4328-4fe0-976e-787c1f38d7bb-util\") on node \"crc\" DevicePath \"\"" Feb 24 15:08:44 crc kubenswrapper[4982]: I0224 15:08:44.968714 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5p6m\" (UniqueName: \"kubernetes.io/projected/c2092b9c-4328-4fe0-976e-787c1f38d7bb-kube-api-access-p5p6m\") on node \"crc\" DevicePath \"\"" Feb 24 15:08:45 crc kubenswrapper[4982]: I0224 15:08:45.419807 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" event={"ID":"c2092b9c-4328-4fe0-976e-787c1f38d7bb","Type":"ContainerDied","Data":"a9c59a6aae6d1013a2a8cd3e14bcb61d6cbea940541c58aca6e06faf9ec08789"} Feb 24 15:08:45 crc kubenswrapper[4982]: I0224 15:08:45.420191 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9c59a6aae6d1013a2a8cd3e14bcb61d6cbea940541c58aca6e06faf9ec08789" Feb 24 15:08:45 crc kubenswrapper[4982]: I0224 15:08:45.419879 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:57.640602 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d897576-j27pk"] Feb 24 15:08:58 crc kubenswrapper[4982]: E0224 15:08:57.641548 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerName="extract" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:57.641563 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerName="extract" Feb 24 15:08:58 crc kubenswrapper[4982]: E0224 15:08:57.641580 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerName="pull" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:57.641588 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerName="pull" Feb 24 15:08:58 crc kubenswrapper[4982]: E0224 15:08:57.641622 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerName="util" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:57.641631 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerName="util" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:57.641810 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2092b9c-4328-4fe0-976e-787c1f38d7bb" containerName="extract" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:57.642422 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:57.648455 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-ccv5p" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:57.680952 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d897576-j27pk"] Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:58.167125 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n984\" (UniqueName: \"kubernetes.io/projected/26238893-6a15-42a9-ae76-3c1c9aa798ee-kube-api-access-6n984\") pod \"openstack-operator-controller-init-f8d897576-j27pk\" (UID: \"26238893-6a15-42a9-ae76-3c1c9aa798ee\") " pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:58.465724 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n984\" (UniqueName: \"kubernetes.io/projected/26238893-6a15-42a9-ae76-3c1c9aa798ee-kube-api-access-6n984\") pod \"openstack-operator-controller-init-f8d897576-j27pk\" (UID: \"26238893-6a15-42a9-ae76-3c1c9aa798ee\") " pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:58.517998 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n984\" (UniqueName: \"kubernetes.io/projected/26238893-6a15-42a9-ae76-3c1c9aa798ee-kube-api-access-6n984\") pod \"openstack-operator-controller-init-f8d897576-j27pk\" (UID: \"26238893-6a15-42a9-ae76-3c1c9aa798ee\") " pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" Feb 24 15:08:58 crc kubenswrapper[4982]: I0224 15:08:58.564552 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" Feb 24 15:08:59 crc kubenswrapper[4982]: I0224 15:08:59.012369 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d897576-j27pk"] Feb 24 15:08:59 crc kubenswrapper[4982]: I0224 15:08:59.555233 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" event={"ID":"26238893-6a15-42a9-ae76-3c1c9aa798ee","Type":"ContainerStarted","Data":"37265a2a2649aee4386eac86756b4fa094fc80c8646b8f1ee1669d9ee28d7353"} Feb 24 15:09:02 crc kubenswrapper[4982]: I0224 15:09:02.585544 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" event={"ID":"26238893-6a15-42a9-ae76-3c1c9aa798ee","Type":"ContainerStarted","Data":"3affc01d6c203211957ff77dc0875e8af09d24499fa792f2230cce91b5796741"} Feb 24 15:09:02 crc kubenswrapper[4982]: I0224 15:09:02.586243 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" Feb 24 15:09:02 crc kubenswrapper[4982]: I0224 15:09:02.637454 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" podStartSLOduration=2.283308238 podStartE2EDuration="5.63742778s" podCreationTimestamp="2026-02-24 15:08:57 +0000 UTC" firstStartedPulling="2026-02-24 15:08:59.016752509 +0000 UTC m=+1200.635811002" lastFinishedPulling="2026-02-24 15:09:02.370872041 +0000 UTC m=+1203.989930544" observedRunningTime="2026-02-24 15:09:02.630653425 +0000 UTC m=+1204.249711968" watchObservedRunningTime="2026-02-24 15:09:02.63742778 +0000 UTC m=+1204.256486303" Feb 24 15:09:08 crc kubenswrapper[4982]: I0224 15:09:08.569052 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-f8d897576-j27pk" Feb 24 15:09:21 crc kubenswrapper[4982]: I0224 15:09:21.810329 4982 scope.go:117] "RemoveContainer" containerID="f285a4dff2586b06d32f60b93dde901764d4e4cfc073c0c8d7b45c5826a87e5a" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.796547 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9"] Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.798164 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.801011 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-58zgh" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.821634 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8"] Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.822826 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.829339 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zg4rm" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.832888 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9"] Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.843161 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8"] Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.852836 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v"] Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.854492 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.864630 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-q945p" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.866250 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v"] Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.891564 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx"] Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.892543 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.917549 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6jcx8" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.993132 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx"] Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.993798 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhzhp\" (UniqueName: \"kubernetes.io/projected/5602df8b-a253-42b4-8b7d-93a3a793fa2a-kube-api-access-nhzhp\") pod \"cinder-operator-controller-manager-55d77d7b5c-zmxj9\" (UID: \"5602df8b-a253-42b4-8b7d-93a3a793fa2a\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.993881 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlspp\" (UniqueName: \"kubernetes.io/projected/b0f19215-2346-4a5a-8b4a-30f19af5db6c-kube-api-access-nlspp\") pod \"designate-operator-controller-manager-6d8bf5c495-rqq7v\" (UID: \"b0f19215-2346-4a5a-8b4a-30f19af5db6c\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" Feb 24 15:09:39 crc kubenswrapper[4982]: I0224 15:09:39.994080 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vnv\" (UniqueName: \"kubernetes.io/projected/b5bb21b9-5878-4960-9bd6-f46b48419f59-kube-api-access-f5vnv\") pod \"barbican-operator-controller-manager-868647ff47-xdgp8\" (UID: \"b5bb21b9-5878-4960-9bd6-f46b48419f59\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.030022 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.031418 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.034979 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-q2qsx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.058605 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-pcpml"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.059685 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.065152 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.065436 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-d4gk6" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.072094 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.073162 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.079649 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kdhxs" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.086640 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.088006 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.094191 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dhf6d" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.096788 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlspp\" (UniqueName: \"kubernetes.io/projected/b0f19215-2346-4a5a-8b4a-30f19af5db6c-kube-api-access-nlspp\") pod \"designate-operator-controller-manager-6d8bf5c495-rqq7v\" (UID: \"b0f19215-2346-4a5a-8b4a-30f19af5db6c\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.097040 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7ktx\" (UniqueName: \"kubernetes.io/projected/e07053fe-da4d-437b-b884-659d18acc903-kube-api-access-j7ktx\") pod \"glance-operator-controller-manager-784b5bb6c5-8dxfx\" (UID: \"e07053fe-da4d-437b-b884-659d18acc903\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.097096 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5vnv\" (UniqueName: \"kubernetes.io/projected/b5bb21b9-5878-4960-9bd6-f46b48419f59-kube-api-access-f5vnv\") pod \"barbican-operator-controller-manager-868647ff47-xdgp8\" (UID: \"b5bb21b9-5878-4960-9bd6-f46b48419f59\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.097225 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhzhp\" (UniqueName: \"kubernetes.io/projected/5602df8b-a253-42b4-8b7d-93a3a793fa2a-kube-api-access-nhzhp\") pod \"cinder-operator-controller-manager-55d77d7b5c-zmxj9\" (UID: \"5602df8b-a253-42b4-8b7d-93a3a793fa2a\") " 
pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.125913 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.137395 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5vnv\" (UniqueName: \"kubernetes.io/projected/b5bb21b9-5878-4960-9bd6-f46b48419f59-kube-api-access-f5vnv\") pod \"barbican-operator-controller-manager-868647ff47-xdgp8\" (UID: \"b5bb21b9-5878-4960-9bd6-f46b48419f59\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.142400 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.142068 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhzhp\" (UniqueName: \"kubernetes.io/projected/5602df8b-a253-42b4-8b7d-93a3a793fa2a-kube-api-access-nhzhp\") pod \"cinder-operator-controller-manager-55d77d7b5c-zmxj9\" (UID: \"5602df8b-a253-42b4-8b7d-93a3a793fa2a\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.155296 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-pcpml"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.159596 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlspp\" (UniqueName: \"kubernetes.io/projected/b0f19215-2346-4a5a-8b4a-30f19af5db6c-kube-api-access-nlspp\") pod \"designate-operator-controller-manager-6d8bf5c495-rqq7v\" (UID: \"b0f19215-2346-4a5a-8b4a-30f19af5db6c\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.177122 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.193644 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.220180 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trpmx\" (UniqueName: \"kubernetes.io/projected/5da22390-3c90-4096-8f6b-ac0f8feb4f46-kube-api-access-trpmx\") pod \"ironic-operator-controller-manager-554564d7fc-f5stb\" (UID: \"5da22390-3c90-4096-8f6b-ac0f8feb4f46\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.223649 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7ktx\" (UniqueName: \"kubernetes.io/projected/e07053fe-da4d-437b-b884-659d18acc903-kube-api-access-j7ktx\") pod \"glance-operator-controller-manager-784b5bb6c5-8dxfx\" (UID: \"e07053fe-da4d-437b-b884-659d18acc903\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.223773 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf4nv\" (UniqueName: \"kubernetes.io/projected/7dbd2798-1deb-4014-9bad-8446f47f49e8-kube-api-access-vf4nv\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.223814 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j4gr\" (UniqueName: \"kubernetes.io/projected/7da568d1-374c-4687-8724-ceee9b3857a7-kube-api-access-6j4gr\") pod \"horizon-operator-controller-manager-5b9b8895d5-sl785\" (UID: \"7da568d1-374c-4687-8724-ceee9b3857a7\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.223905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.223989 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crl6n\" (UniqueName: \"kubernetes.io/projected/84e2fa7b-8efb-4e72-b6a4-42b10ab15984-kube-api-access-crl6n\") pod \"heat-operator-controller-manager-69f49c598c-4fbcm\" (UID: \"84e2fa7b-8efb-4e72-b6a4-42b10ab15984\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.228627 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.277283 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 
15:09:40.278317 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.293901 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-z8htd" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.294673 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7ktx\" (UniqueName: \"kubernetes.io/projected/e07053fe-da4d-437b-b884-659d18acc903-kube-api-access-j7ktx\") pod \"glance-operator-controller-manager-784b5bb6c5-8dxfx\" (UID: \"e07053fe-da4d-437b-b884-659d18acc903\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.300896 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.301897 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.309773 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kjckr" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.316551 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.325353 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.325408 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crl6n\" (UniqueName: \"kubernetes.io/projected/84e2fa7b-8efb-4e72-b6a4-42b10ab15984-kube-api-access-crl6n\") pod \"heat-operator-controller-manager-69f49c598c-4fbcm\" (UID: \"84e2fa7b-8efb-4e72-b6a4-42b10ab15984\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.325465 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcr6v\" (UniqueName: \"kubernetes.io/projected/c863f339-9142-4edc-b547-9bf0fd0d64bc-kube-api-access-kcr6v\") pod \"keystone-operator-controller-manager-b4d948c87-bqhfw\" (UID: \"c863f339-9142-4edc-b547-9bf0fd0d64bc\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.325532 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d4nb\" (UniqueName: \"kubernetes.io/projected/cdca3167-b0ff-41e3-8802-02d92f829aff-kube-api-access-7d4nb\") pod \"manila-operator-controller-manager-67d996989d-xk8mt\" (UID: \"cdca3167-b0ff-41e3-8802-02d92f829aff\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.325570 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trpmx\" (UniqueName: \"kubernetes.io/projected/5da22390-3c90-4096-8f6b-ac0f8feb4f46-kube-api-access-trpmx\") pod \"ironic-operator-controller-manager-554564d7fc-f5stb\" (UID: \"5da22390-3c90-4096-8f6b-ac0f8feb4f46\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.325606 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf4nv\" (UniqueName: \"kubernetes.io/projected/7dbd2798-1deb-4014-9bad-8446f47f49e8-kube-api-access-vf4nv\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.325630 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j4gr\" (UniqueName: \"kubernetes.io/projected/7da568d1-374c-4687-8724-ceee9b3857a7-kube-api-access-6j4gr\") pod \"horizon-operator-controller-manager-5b9b8895d5-sl785\" (UID: \"7da568d1-374c-4687-8724-ceee9b3857a7\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" Feb 24 15:09:40 crc kubenswrapper[4982]: E0224 15:09:40.326672 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:40 crc kubenswrapper[4982]: E0224 15:09:40.326741 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert podName:7dbd2798-1deb-4014-9bad-8446f47f49e8 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:40.826720557 +0000 UTC m=+1242.445779050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert") pod "infra-operator-controller-manager-79d975b745-pcpml" (UID: "7dbd2798-1deb-4014-9bad-8446f47f49e8") : secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.348056 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.349448 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.359981 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.360356 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-r8fld" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.376128 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j4gr\" (UniqueName: \"kubernetes.io/projected/7da568d1-374c-4687-8724-ceee9b3857a7-kube-api-access-6j4gr\") pod \"horizon-operator-controller-manager-5b9b8895d5-sl785\" (UID: \"7da568d1-374c-4687-8724-ceee9b3857a7\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.376637 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf4nv\" (UniqueName: \"kubernetes.io/projected/7dbd2798-1deb-4014-9bad-8446f47f49e8-kube-api-access-vf4nv\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.382762 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trpmx\" (UniqueName: \"kubernetes.io/projected/5da22390-3c90-4096-8f6b-ac0f8feb4f46-kube-api-access-trpmx\") pod \"ironic-operator-controller-manager-554564d7fc-f5stb\" (UID: \"5da22390-3c90-4096-8f6b-ac0f8feb4f46\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.383693 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crl6n\" (UniqueName: \"kubernetes.io/projected/84e2fa7b-8efb-4e72-b6a4-42b10ab15984-kube-api-access-crl6n\") pod \"heat-operator-controller-manager-69f49c598c-4fbcm\" (UID: \"84e2fa7b-8efb-4e72-b6a4-42b10ab15984\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.387042 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.420645 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.427092 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.429042 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.430073 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcr6v\" (UniqueName: \"kubernetes.io/projected/c863f339-9142-4edc-b547-9bf0fd0d64bc-kube-api-access-kcr6v\") pod \"keystone-operator-controller-manager-b4d948c87-bqhfw\" (UID: \"c863f339-9142-4edc-b547-9bf0fd0d64bc\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.430113 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b24fw\" (UniqueName: \"kubernetes.io/projected/12f6eea3-aefd-485f-a582-af40549cefa0-kube-api-access-b24fw\") pod \"mariadb-operator-controller-manager-6994f66f48-fwqhl\" (UID: \"12f6eea3-aefd-485f-a582-af40549cefa0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.430162 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d4nb\" (UniqueName: \"kubernetes.io/projected/cdca3167-b0ff-41e3-8802-02d92f829aff-kube-api-access-7d4nb\") pod \"manila-operator-controller-manager-67d996989d-xk8mt\" (UID: \"cdca3167-b0ff-41e3-8802-02d92f829aff\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.434592 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.439808 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bfqch" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.441523 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.448593 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-247rw"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.449742 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.452001 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fxk8r" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.452195 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d4nb\" (UniqueName: \"kubernetes.io/projected/cdca3167-b0ff-41e3-8802-02d92f829aff-kube-api-access-7d4nb\") pod \"manila-operator-controller-manager-67d996989d-xk8mt\" (UID: \"cdca3167-b0ff-41e3-8802-02d92f829aff\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.468438 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.473602 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-247rw"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.473701 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.476578 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-btfbz" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.483880 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.497582 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcr6v\" (UniqueName: \"kubernetes.io/projected/c863f339-9142-4edc-b547-9bf0fd0d64bc-kube-api-access-kcr6v\") pod \"keystone-operator-controller-manager-b4d948c87-bqhfw\" (UID: \"c863f339-9142-4edc-b547-9bf0fd0d64bc\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.503030 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.520857 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.526999 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.528451 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.530615 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-dc4km" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.534039 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpfxh\" (UniqueName: \"kubernetes.io/projected/d6146e6e-9a66-43aa-803c-df072ec31d11-kube-api-access-kpfxh\") pod \"nova-operator-controller-manager-567668f5cf-247rw\" (UID: \"d6146e6e-9a66-43aa-803c-df072ec31d11\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.534101 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b24fw\" (UniqueName: \"kubernetes.io/projected/12f6eea3-aefd-485f-a582-af40549cefa0-kube-api-access-b24fw\") pod \"mariadb-operator-controller-manager-6994f66f48-fwqhl\" (UID: \"12f6eea3-aefd-485f-a582-af40549cefa0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.534137 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfthg\" (UniqueName: \"kubernetes.io/projected/5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7-kube-api-access-nfthg\") pod \"neutron-operator-controller-manager-6bd4687957-l8xls\" (UID: \"5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.534189 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qz9k\" (UniqueName: \"kubernetes.io/projected/c0967978-25a6-416a-81be-1153d5f5f74b-kube-api-access-4qz9k\") pod \"octavia-operator-controller-manager-659dc6bbfc-rgfzx\" (UID: \"c0967978-25a6-416a-81be-1153d5f5f74b\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.561284 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.569687 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.570696 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.574203 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.583972 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b24fw\" (UniqueName: \"kubernetes.io/projected/12f6eea3-aefd-485f-a582-af40549cefa0-kube-api-access-b24fw\") pod \"mariadb-operator-controller-manager-6994f66f48-fwqhl\" (UID: \"12f6eea3-aefd-485f-a582-af40549cefa0\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.584325 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5hbcz" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.591401 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.597420 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.598806 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.599942 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.600368 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-495zm" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.630832 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-2tqm7" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.641856 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qz9k\" (UniqueName: \"kubernetes.io/projected/c0967978-25a6-416a-81be-1153d5f5f74b-kube-api-access-4qz9k\") pod \"octavia-operator-controller-manager-659dc6bbfc-rgfzx\" (UID: \"c0967978-25a6-416a-81be-1153d5f5f74b\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.641980 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpfxh\" (UniqueName: \"kubernetes.io/projected/d6146e6e-9a66-43aa-803c-df072ec31d11-kube-api-access-kpfxh\") pod \"nova-operator-controller-manager-567668f5cf-247rw\" (UID: \"d6146e6e-9a66-43aa-803c-df072ec31d11\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.642025 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjr5\" (UniqueName: \"kubernetes.io/projected/d7687408-a30d-42c8-826f-759659e87262-kube-api-access-cnjr5\") pod \"ovn-operator-controller-manager-5955d8c787-pmh6c\" (UID: \"d7687408-a30d-42c8-826f-759659e87262\") " 
pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.642061 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsk9\" (UniqueName: \"kubernetes.io/projected/2ed106e6-c770-4724-a803-29b4d1b74b6b-kube-api-access-wpsk9\") pod \"placement-operator-controller-manager-8497b45c89-j2q85\" (UID: \"2ed106e6-c770-4724-a803-29b4d1b74b6b\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.642087 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8zsc\" (UniqueName: \"kubernetes.io/projected/8670907a-5fad-4602-8578-5eb1a19d1b44-kube-api-access-j8zsc\") pod \"swift-operator-controller-manager-68f46476f-xzh2k\" (UID: \"8670907a-5fad-4602-8578-5eb1a19d1b44\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.642112 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.642146 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfthg\" (UniqueName: \"kubernetes.io/projected/5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7-kube-api-access-nfthg\") pod \"neutron-operator-controller-manager-6bd4687957-l8xls\" (UID: \"5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.642172 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9np2r\" (UniqueName: \"kubernetes.io/projected/c4924244-1803-429b-9c50-8a5c33b1f1b6-kube-api-access-9np2r\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.654133 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.663366 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qz9k\" (UniqueName: \"kubernetes.io/projected/c0967978-25a6-416a-81be-1153d5f5f74b-kube-api-access-4qz9k\") pod \"octavia-operator-controller-manager-659dc6bbfc-rgfzx\" (UID: \"c0967978-25a6-416a-81be-1153d5f5f74b\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.665564 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.673175 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpfxh\" (UniqueName: \"kubernetes.io/projected/d6146e6e-9a66-43aa-803c-df072ec31d11-kube-api-access-kpfxh\") pod \"nova-operator-controller-manager-567668f5cf-247rw\" (UID: \"d6146e6e-9a66-43aa-803c-df072ec31d11\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.677195 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.704968 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.705961 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.705872 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfthg\" (UniqueName: \"kubernetes.io/projected/5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7-kube-api-access-nfthg\") pod \"neutron-operator-controller-manager-6bd4687957-l8xls\" (UID: \"5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.712395 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.725579 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.732521 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.733883 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.735991 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mtqhk" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.743589 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpsk9\" (UniqueName: \"kubernetes.io/projected/2ed106e6-c770-4724-a803-29b4d1b74b6b-kube-api-access-wpsk9\") pod \"placement-operator-controller-manager-8497b45c89-j2q85\" (UID: \"2ed106e6-c770-4724-a803-29b4d1b74b6b\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.743671 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.743662 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprx7\" (UniqueName: \"kubernetes.io/projected/95e748d2-45c9-4279-b1b4-9a0d18dce523-kube-api-access-dprx7\") pod \"test-operator-controller-manager-5dc6794d5b-qhlv4\" (UID: \"95e748d2-45c9-4279-b1b4-9a0d18dce523\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.744077 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8zsc\" (UniqueName: \"kubernetes.io/projected/8670907a-5fad-4602-8578-5eb1a19d1b44-kube-api-access-j8zsc\") pod \"swift-operator-controller-manager-68f46476f-xzh2k\" (UID: \"8670907a-5fad-4602-8578-5eb1a19d1b44\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.744107 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.744138 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9np2r\" (UniqueName: \"kubernetes.io/projected/c4924244-1803-429b-9c50-8a5c33b1f1b6-kube-api-access-9np2r\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:40 crc kubenswrapper[4982]: E0224 15:09:40.744809 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:40 crc kubenswrapper[4982]: E0224 15:09:40.744881 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert podName:c4924244-1803-429b-9c50-8a5c33b1f1b6 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:41.244861462 +0000 UTC m=+1242.863919955 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" (UID: "c4924244-1803-429b-9c50-8a5c33b1f1b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.746469 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjr5\" (UniqueName: \"kubernetes.io/projected/d7687408-a30d-42c8-826f-759659e87262-kube-api-access-cnjr5\") pod \"ovn-operator-controller-manager-5955d8c787-pmh6c\" (UID: \"d7687408-a30d-42c8-826f-759659e87262\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.750653 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.759960 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-w6fnv" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.765068 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpsk9\" (UniqueName: \"kubernetes.io/projected/2ed106e6-c770-4724-a803-29b4d1b74b6b-kube-api-access-wpsk9\") pod \"placement-operator-controller-manager-8497b45c89-j2q85\" (UID: \"2ed106e6-c770-4724-a803-29b4d1b74b6b\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.769081 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.769555 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjr5\" (UniqueName: \"kubernetes.io/projected/d7687408-a30d-42c8-826f-759659e87262-kube-api-access-cnjr5\") pod \"ovn-operator-controller-manager-5955d8c787-pmh6c\" (UID: \"d7687408-a30d-42c8-826f-759659e87262\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.770778 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9np2r\" (UniqueName: \"kubernetes.io/projected/c4924244-1803-429b-9c50-8a5c33b1f1b6-kube-api-access-9np2r\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.775774 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8zsc\" (UniqueName: \"kubernetes.io/projected/8670907a-5fad-4602-8578-5eb1a19d1b44-kube-api-access-j8zsc\") pod \"swift-operator-controller-manager-68f46476f-xzh2k\" (UID: \"8670907a-5fad-4602-8578-5eb1a19d1b44\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.785317 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.843944 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.848727 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dprx7\" (UniqueName: \"kubernetes.io/projected/95e748d2-45c9-4279-b1b4-9a0d18dce523-kube-api-access-dprx7\") pod \"test-operator-controller-manager-5dc6794d5b-qhlv4\" (UID: \"95e748d2-45c9-4279-b1b4-9a0d18dce523\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.848815 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzsr\" (UniqueName: \"kubernetes.io/projected/d4d8baf6-e8b2-4a05-b73e-ca563c3bb172-kube-api-access-qkzsr\") pod \"telemetry-operator-controller-manager-6c7fcb66df-f9pl4\" (UID: \"d4d8baf6-e8b2-4a05-b73e-ca563c3bb172\") " pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.848862 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:40 crc kubenswrapper[4982]: E0224 15:09:40.849031 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:40 crc kubenswrapper[4982]: E0224 15:09:40.849090 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert podName:7dbd2798-1deb-4014-9bad-8446f47f49e8 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:41.849073646 +0000 UTC m=+1243.468132139 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert") pod "infra-operator-controller-manager-79d975b745-pcpml" (UID: "7dbd2798-1deb-4014-9bad-8446f47f49e8") : secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.850949 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.852910 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.855401 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dt594" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.860128 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.868227 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.872462 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dprx7\" (UniqueName: \"kubernetes.io/projected/95e748d2-45c9-4279-b1b4-9a0d18dce523-kube-api-access-dprx7\") pod \"test-operator-controller-manager-5dc6794d5b-qhlv4\" (UID: \"95e748d2-45c9-4279-b1b4-9a0d18dce523\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.873391 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.894060 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.951869 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkzsr\" (UniqueName: \"kubernetes.io/projected/d4d8baf6-e8b2-4a05-b73e-ca563c3bb172-kube-api-access-qkzsr\") pod \"telemetry-operator-controller-manager-6c7fcb66df-f9pl4\" (UID: \"d4d8baf6-e8b2-4a05-b73e-ca563c3bb172\") " pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.951964 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kjv2\" (UniqueName: \"kubernetes.io/projected/c45b9d77-86d6-4763-b9aa-44549e04016a-kube-api-access-8kjv2\") pod \"watcher-operator-controller-manager-bccc79885-pjtlt\" (UID: \"c45b9d77-86d6-4763-b9aa-44549e04016a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.974080 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkzsr\" (UniqueName: \"kubernetes.io/projected/d4d8baf6-e8b2-4a05-b73e-ca563c3bb172-kube-api-access-qkzsr\") pod \"telemetry-operator-controller-manager-6c7fcb66df-f9pl4\" (UID: \"d4d8baf6-e8b2-4a05-b73e-ca563c3bb172\") " pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.977787 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2"] Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.978980 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.980116 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" event={"ID":"b5bb21b9-5878-4960-9bd6-f46b48419f59","Type":"ContainerStarted","Data":"3d3b022d67c0d20fe1c0377e19152145ddd9d91e5b05d1d7e4bec371f6537e06"} Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.980589 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.985123 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.985546 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 24 15:09:40 crc kubenswrapper[4982]: I0224 15:09:40.985761 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-h4z77" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:40.998382 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2"] Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.004960 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.034962 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f"] Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.036050 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.040001 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-9nqsb" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.044456 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f"] Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.054585 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kjv2\" (UniqueName: \"kubernetes.io/projected/c45b9d77-86d6-4763-b9aa-44549e04016a-kube-api-access-8kjv2\") pod \"watcher-operator-controller-manager-bccc79885-pjtlt\" (UID: \"c45b9d77-86d6-4763-b9aa-44549e04016a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.054637 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.054662 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glcmx\" (UniqueName: \"kubernetes.io/projected/f03d42ea-ad31-451a-99a7-c1ecc595f924-kube-api-access-glcmx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w8g8f\" (UID: \"f03d42ea-ad31-451a-99a7-c1ecc595f924\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.054713 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.055535 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq2xc\" (UniqueName: \"kubernetes.io/projected/ee761700-7a4b-4f96-8f99-31c55ed51962-kube-api-access-rq2xc\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.071741 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kjv2\" (UniqueName: \"kubernetes.io/projected/c45b9d77-86d6-4763-b9aa-44549e04016a-kube-api-access-8kjv2\") pod \"watcher-operator-controller-manager-bccc79885-pjtlt\" (UID: \"c45b9d77-86d6-4763-b9aa-44549e04016a\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.106239 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.120814 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.143579 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8"] Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.157570 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.157622 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glcmx\" (UniqueName: \"kubernetes.io/projected/f03d42ea-ad31-451a-99a7-c1ecc595f924-kube-api-access-glcmx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w8g8f\" (UID: \"f03d42ea-ad31-451a-99a7-c1ecc595f924\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.157682 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.157743 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq2xc\" (UniqueName: \"kubernetes.io/projected/ee761700-7a4b-4f96-8f99-31c55ed51962-kube-api-access-rq2xc\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: 
\"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.158252 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.158304 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:41.658286899 +0000 UTC m=+1243.277345392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "metrics-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.158540 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.158573 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:41.658563917 +0000 UTC m=+1243.277622410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "webhook-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.182045 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq2xc\" (UniqueName: \"kubernetes.io/projected/ee761700-7a4b-4f96-8f99-31c55ed51962-kube-api-access-rq2xc\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.184409 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glcmx\" (UniqueName: \"kubernetes.io/projected/f03d42ea-ad31-451a-99a7-c1ecc595f924-kube-api-access-glcmx\") pod \"rabbitmq-cluster-operator-manager-668c99d594-w8g8f\" (UID: \"f03d42ea-ad31-451a-99a7-c1ecc595f924\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.191328 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.211704 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v"] Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.271664 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.274203 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.274277 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert podName:c4924244-1803-429b-9c50-8a5c33b1f1b6 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:42.274258656 +0000 UTC m=+1243.893317149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" (UID: "c4924244-1803-429b-9c50-8a5c33b1f1b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.330223 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9"] Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.357320 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm"] Feb 24 15:09:41 crc kubenswrapper[4982]: W0224 15:09:41.359786 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5602df8b_a253_42b4_8b7d_93a3a793fa2a.slice/crio-c731d28e537044f20f44b8e6df4a0ab501c4fc19d20cafee115a33e48c2e3887 WatchSource:0}: Error finding container c731d28e537044f20f44b8e6df4a0ab501c4fc19d20cafee115a33e48c2e3887: Status 404 returned error can't find the container with id c731d28e537044f20f44b8e6df4a0ab501c4fc19d20cafee115a33e48c2e3887 Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.401378 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.682365 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.682690 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.682589 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.682856 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:42.682823022 +0000 UTC m=+1244.301881525 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "metrics-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.682892 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.682940 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:42.682925715 +0000 UTC m=+1244.301984298 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "webhook-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.886647 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.886905 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: E0224 15:09:41.886980 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert podName:7dbd2798-1deb-4014-9bad-8446f47f49e8 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:43.886947316 +0000 UTC m=+1245.506005809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert") pod "infra-operator-controller-manager-79d975b745-pcpml" (UID: "7dbd2798-1deb-4014-9bad-8446f47f49e8") : secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:41 crc kubenswrapper[4982]: I0224 15:09:41.888833 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.022428 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" event={"ID":"b0f19215-2346-4a5a-8b4a-30f19af5db6c","Type":"ContainerStarted","Data":"93419bbbd59bacbde9d91860cfb0fbdffb471d60cc3e4693f4277a1120e6f0f9"} Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.044901 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" event={"ID":"84e2fa7b-8efb-4e72-b6a4-42b10ab15984","Type":"ContainerStarted","Data":"b8f0b9f010f1f538d6589afd724835b3a56183b078b282708e6a654137ed402c"} Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.051665 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" event={"ID":"e07053fe-da4d-437b-b884-659d18acc903","Type":"ContainerStarted","Data":"0dfdfbd23bc25f1c73282ad95526b0290e18c449673cf8542ced3e55d92eae89"} Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.060468 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.080527 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" event={"ID":"5602df8b-a253-42b4-8b7d-93a3a793fa2a","Type":"ContainerStarted","Data":"c731d28e537044f20f44b8e6df4a0ab501c4fc19d20cafee115a33e48c2e3887"} Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.107013 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.115005 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.129282 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-247rw"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.143625 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.303019 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.303767 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.303869 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert podName:c4924244-1803-429b-9c50-8a5c33b1f1b6 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:44.303847594 +0000 UTC m=+1245.922906077 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" (UID: "c4924244-1803-429b-9c50-8a5c33b1f1b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.553378 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.572302 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785"] Feb 24 15:09:42 crc kubenswrapper[4982]: W0224 15:09:42.581417 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0967978_25a6_416a_81be_1153d5f5f74b.slice/crio-62dbf03389d695748ceaf95a9b4dd4bce6a938ba8c22cae631050df126faa68d WatchSource:0}: Error finding container 62dbf03389d695748ceaf95a9b4dd4bce6a938ba8c22cae631050df126faa68d: Status 404 returned error can't find the container with id 62dbf03389d695748ceaf95a9b4dd4bce6a938ba8c22cae631050df126faa68d Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.586259 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls"] Feb 24 15:09:42 crc kubenswrapper[4982]: W0224 15:09:42.586987 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bdfbdfb_c57b_4d3e_9a8f_9b93b18fc6f7.slice/crio-8b3365ed539e515a8db261f268aa6e227ce940021444f2c32abdfb6686c9a646 WatchSource:0}: Error finding container 
8b3365ed539e515a8db261f268aa6e227ce940021444f2c32abdfb6686c9a646: Status 404 returned error can't find the container with id 8b3365ed539e515a8db261f268aa6e227ce940021444f2c32abdfb6686c9a646 Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.600470 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.710126 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.710194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.710360 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.710404 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:44.710390081 +0000 UTC m=+1246.329448564 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "webhook-server-cert" not found Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.710708 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.710806 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:44.710780122 +0000 UTC m=+1246.329838675 (durationBeforeRetry 2s). 
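[annotation] The MountVolume.SetUp failures above all trace to Secrets that do not yet exist in the openstack-operators namespace; everything else about the pods (service-account token mounts, sandbox creation) is proceeding normally. A diagnostic sketch with client-go, assuming a kubeconfig at the default location; the secret names and namespace are taken verbatim from the errors in the log:

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig at the default path with access to the cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for _, name := range []string{"metrics-server-cert", "webhook-server-cert", "infra-operator-webhook-server-cert"} {
            _, err := cs.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
            switch {
            case apierrors.IsNotFound(err):
                fmt.Printf("secret %q missing; kubelet mounts will keep retrying\n", name)
            case err != nil:
                panic(err)
            default:
                fmt.Printf("secret %q present\n", name)
            }
        }
    }

These serving-cert secrets are typically published by whatever issues the webhook/metrics certificates (e.g. cert-manager or the operator itself), so the retries normally resolve once issuance completes.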
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "metrics-server-cert" not found Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.813768 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.833817 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85"] Feb 24 15:09:42 crc kubenswrapper[4982]: W0224 15:09:42.844631 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf03d42ea_ad31_451a_99a7_c1ecc595f924.slice/crio-5e54ddf68df8e26e77bf5beb1c0a76e3787a1e2c9199b33c618f3501aebcbe1e WatchSource:0}: Error finding container 5e54ddf68df8e26e77bf5beb1c0a76e3787a1e2c9199b33c618f3501aebcbe1e: Status 404 returned error can't find the container with id 5e54ddf68df8e26e77bf5beb1c0a76e3787a1e2c9199b33c618f3501aebcbe1e Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.844743 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.854694 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.862271 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4"] Feb 24 15:09:42 crc kubenswrapper[4982]: I0224 15:09:42.873420 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt"] Feb 24 15:09:42 crc kubenswrapper[4982]: W0224 15:09:42.888314 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95e748d2_45c9_4279_b1b4_9a0d18dce523.slice/crio-79c3f663e22d747436e9ef3a9845507754087f7b364562ce2b7ed745822755dc WatchSource:0}: Error finding container 79c3f663e22d747436e9ef3a9845507754087f7b364562ce2b7ed745822755dc: Status 404 returned error can't find the container with id 79c3f663e22d747436e9ef3a9845507754087f7b364562ce2b7ed745822755dc Feb 24 15:09:42 crc kubenswrapper[4982]: W0224 15:09:42.889009 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4d8baf6_e8b2_4a05_b73e_ca563c3bb172.slice/crio-617b189f1b1c9c097d7878b29bbc4eb10214a4f1214150338bc5f1a3b0493c8c WatchSource:0}: Error finding container 617b189f1b1c9c097d7878b29bbc4eb10214a4f1214150338bc5f1a3b0493c8c: Status 404 returned error can't find the container with id 617b189f1b1c9c097d7878b29bbc4eb10214a4f1214150338bc5f1a3b0493c8c Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.898857 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.69:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qkzsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6c7fcb66df-f9pl4_openstack-operators(d4d8baf6-e8b2-4a05-b73e-ca563c3bb172): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.899483 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kjv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-pjtlt_openstack-operators(c45b9d77-86d6-4763-b9aa-44549e04016a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.900442 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" podUID="d4d8baf6-e8b2-4a05-b73e-ca563c3bb172" Feb 24 15:09:42 crc kubenswrapper[4982]: E0224 15:09:42.901366 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" podUID="c45b9d77-86d6-4763-b9aa-44549e04016a" Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.091121 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" event={"ID":"cdca3167-b0ff-41e3-8802-02d92f829aff","Type":"ContainerStarted","Data":"8b7b89200b54bf2e9d288c9c4378ee4bf8733e1e92ca3cf1d2779a3a6f3aeedb"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.097149 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" event={"ID":"7da568d1-374c-4687-8724-ceee9b3857a7","Type":"ContainerStarted","Data":"9409ca4bbef9be8c4ad0c78736816c0018f7678de7e337277e14743fd15e1a4c"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.098602 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" event={"ID":"c0967978-25a6-416a-81be-1153d5f5f74b","Type":"ContainerStarted","Data":"62dbf03389d695748ceaf95a9b4dd4bce6a938ba8c22cae631050df126faa68d"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.100817 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" event={"ID":"2ed106e6-c770-4724-a803-29b4d1b74b6b","Type":"ContainerStarted","Data":"a73a6b0815dea3e583aaa0b2721eab707b67017dca3b724fdda8179fdb9fa3a0"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.103869 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" 
event={"ID":"f03d42ea-ad31-451a-99a7-c1ecc595f924","Type":"ContainerStarted","Data":"5e54ddf68df8e26e77bf5beb1c0a76e3787a1e2c9199b33c618f3501aebcbe1e"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.109369 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" event={"ID":"d6146e6e-9a66-43aa-803c-df072ec31d11","Type":"ContainerStarted","Data":"896316b88023666e9654c8c26d5d8ccb5cf7fac12166a1bd03f286d3a2a208e3"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.112138 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" event={"ID":"12f6eea3-aefd-485f-a582-af40549cefa0","Type":"ContainerStarted","Data":"21eec0630032efa0da04729d8ec220bfa2d41d5cfe85dc178cc40b922ffe541f"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.114548 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" event={"ID":"c45b9d77-86d6-4763-b9aa-44549e04016a","Type":"ContainerStarted","Data":"7b749d2b4dd97ed852ede1f6c58e7bfd3b56c8eb27bb636e0a50e2e4b298a324"} Feb 24 15:09:43 crc kubenswrapper[4982]: E0224 15:09:43.130121 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" podUID="c45b9d77-86d6-4763-b9aa-44549e04016a" Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.137437 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" event={"ID":"8670907a-5fad-4602-8578-5eb1a19d1b44","Type":"ContainerStarted","Data":"344801f223f9bb5a4865488d7bd2a91cd5d24985e463251c1476046539ba86e9"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.140170 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" event={"ID":"95e748d2-45c9-4279-b1b4-9a0d18dce523","Type":"ContainerStarted","Data":"79c3f663e22d747436e9ef3a9845507754087f7b364562ce2b7ed745822755dc"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.144016 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" event={"ID":"5da22390-3c90-4096-8f6b-ac0f8feb4f46","Type":"ContainerStarted","Data":"1287ecb3f88cf97bb8519fcaceeddd074805f07c4aee87aac24a0b9665f15f0d"} Feb 24 15:09:43 crc kubenswrapper[4982]: E0224 15:09:43.158356 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.69:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" podUID="d4d8baf6-e8b2-4a05-b73e-ca563c3bb172" Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.193138 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" event={"ID":"5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7","Type":"ContainerStarted","Data":"8b3365ed539e515a8db261f268aa6e227ce940021444f2c32abdfb6686c9a646"} Feb 24 15:09:43 crc kubenswrapper[4982]: 
I0224 15:09:43.193220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" event={"ID":"c863f339-9142-4edc-b547-9bf0fd0d64bc","Type":"ContainerStarted","Data":"f98dd2f220254182a41f255adaa73672f262106e10b576d6a66b1c848a478358"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.193255 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" event={"ID":"d7687408-a30d-42c8-826f-759659e87262","Type":"ContainerStarted","Data":"9703028e4842cd9c2464fb8a948b005d9c05108a14982e5d5d8b10d1fe51564b"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.193269 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" event={"ID":"d4d8baf6-e8b2-4a05-b73e-ca563c3bb172","Type":"ContainerStarted","Data":"617b189f1b1c9c097d7878b29bbc4eb10214a4f1214150338bc5f1a3b0493c8c"} Feb 24 15:09:43 crc kubenswrapper[4982]: I0224 15:09:43.937429 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:43 crc kubenswrapper[4982]: E0224 15:09:43.938058 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:43 crc kubenswrapper[4982]: E0224 15:09:43.938157 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert podName:7dbd2798-1deb-4014-9bad-8446f47f49e8 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:47.938137499 +0000 UTC m=+1249.557195992 (durationBeforeRetry 4s). 
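[annotation] The "ErrImagePull: pull QPS exceeded" failures a few entries above are kubelet's own client-side rate limit on registry pulls, not a registry error: pulls beyond the configured QPS/burst are rejected immediately and retried on a later sync. A toy token-bucket model of that limit (the documented kubelet defaults are registryPullQPS=5 and registryBurst=10; treat the exact values here as an assumption about this cluster):

    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // Token bucket: capacity 10 (burst), refilling at 5 tokens/s (QPS).
        lim := rate.NewLimiter(rate.Limit(5), 10)
        for i := 1; i <= 12; i++ {
            if lim.Allow() {
                fmt.Printf("pull %2d: allowed\n", i)
            } else {
                fmt.Printf("pull %2d: pull QPS exceeded\n", i)
            }
        }
    }

With roughly eighteen operator pods starting at once, the first burst of pulls drains the bucket and the stragglers (telemetry-operator and watcher-operator here) are pushed into retry.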
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert") pod "infra-operator-controller-manager-79d975b745-pcpml" (UID: "7dbd2798-1deb-4014-9bad-8446f47f49e8") : secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:44 crc kubenswrapper[4982]: E0224 15:09:44.166913 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" podUID="c45b9d77-86d6-4763-b9aa-44549e04016a" Feb 24 15:09:44 crc kubenswrapper[4982]: E0224 15:09:44.170809 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.69:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" podUID="d4d8baf6-e8b2-4a05-b73e-ca563c3bb172" Feb 24 15:09:44 crc kubenswrapper[4982]: I0224 15:09:44.346239 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:44 crc kubenswrapper[4982]: E0224 15:09:44.346516 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:44 crc kubenswrapper[4982]: E0224 15:09:44.346563 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert podName:c4924244-1803-429b-9c50-8a5c33b1f1b6 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:48.34655032 +0000 UTC m=+1249.965608813 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" (UID: "c4924244-1803-429b-9c50-8a5c33b1f1b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:44 crc kubenswrapper[4982]: I0224 15:09:44.758041 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:44 crc kubenswrapper[4982]: I0224 15:09:44.758131 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:44 crc kubenswrapper[4982]: E0224 15:09:44.758312 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 15:09:44 crc kubenswrapper[4982]: E0224 15:09:44.758373 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:48.75835889 +0000 UTC m=+1250.377417383 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "webhook-server-cert" not found Feb 24 15:09:44 crc kubenswrapper[4982]: E0224 15:09:44.758309 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 15:09:44 crc kubenswrapper[4982]: E0224 15:09:44.758819 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:48.758782573 +0000 UTC m=+1250.377841126 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "metrics-server-cert" not found Feb 24 15:09:47 crc kubenswrapper[4982]: I0224 15:09:47.950121 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:47 crc kubenswrapper[4982]: E0224 15:09:47.950365 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:47 crc kubenswrapper[4982]: E0224 15:09:47.951075 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert podName:7dbd2798-1deb-4014-9bad-8446f47f49e8 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:55.95104577 +0000 UTC m=+1257.570104293 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert") pod "infra-operator-controller-manager-79d975b745-pcpml" (UID: "7dbd2798-1deb-4014-9bad-8446f47f49e8") : secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:48 crc kubenswrapper[4982]: I0224 15:09:48.358180 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:48 crc kubenswrapper[4982]: E0224 15:09:48.358362 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:48 crc kubenswrapper[4982]: E0224 15:09:48.358431 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert podName:c4924244-1803-429b-9c50-8a5c33b1f1b6 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:56.358413451 +0000 UTC m=+1257.977471944 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" (UID: "c4924244-1803-429b-9c50-8a5c33b1f1b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:48 crc kubenswrapper[4982]: I0224 15:09:48.765148 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:48 crc kubenswrapper[4982]: I0224 15:09:48.765226 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:48 crc kubenswrapper[4982]: E0224 15:09:48.765293 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 15:09:48 crc kubenswrapper[4982]: E0224 15:09:48.765373 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:56.765356719 +0000 UTC m=+1258.384415212 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "metrics-server-cert" not found Feb 24 15:09:48 crc kubenswrapper[4982]: E0224 15:09:48.765384 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 15:09:48 crc kubenswrapper[4982]: E0224 15:09:48.765463 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:09:56.765444412 +0000 UTC m=+1258.384502905 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "webhook-server-cert" not found Feb 24 15:09:55 crc kubenswrapper[4982]: E0224 15:09:55.166129 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" Feb 24 15:09:55 crc kubenswrapper[4982]: E0224 15:09:55.166904 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nfthg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-l8xls_openstack-operators(5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:09:55 crc kubenswrapper[4982]: E0224 15:09:55.168042 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" 
podUID="5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7" Feb 24 15:09:55 crc kubenswrapper[4982]: I0224 15:09:55.995568 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:09:55 crc kubenswrapper[4982]: E0224 15:09:55.995825 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:55 crc kubenswrapper[4982]: E0224 15:09:55.995874 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert podName:7dbd2798-1deb-4014-9bad-8446f47f49e8 nodeName:}" failed. No retries permitted until 2026-02-24 15:10:11.995860725 +0000 UTC m=+1273.614919218 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert") pod "infra-operator-controller-manager-79d975b745-pcpml" (UID: "7dbd2798-1deb-4014-9bad-8446f47f49e8") : secret "infra-operator-webhook-server-cert" not found Feb 24 15:09:56 crc kubenswrapper[4982]: E0224 15:09:56.343996 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" podUID="5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7" Feb 24 15:09:56 crc kubenswrapper[4982]: I0224 15:09:56.402215 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:09:56 crc kubenswrapper[4982]: E0224 15:09:56.402421 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:56 crc kubenswrapper[4982]: E0224 15:09:56.402550 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert podName:c4924244-1803-429b-9c50-8a5c33b1f1b6 nodeName:}" failed. No retries permitted until 2026-02-24 15:10:12.402523156 +0000 UTC m=+1274.021581669 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" (UID: "c4924244-1803-429b-9c50-8a5c33b1f1b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 15:09:56 crc kubenswrapper[4982]: I0224 15:09:56.808969 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:56 crc kubenswrapper[4982]: E0224 15:09:56.809416 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 15:09:56 crc kubenswrapper[4982]: I0224 15:09:56.809744 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:09:56 crc kubenswrapper[4982]: E0224 15:09:56.809787 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:10:12.809745743 +0000 UTC m=+1274.428804336 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "webhook-server-cert" not found Feb 24 15:09:56 crc kubenswrapper[4982]: E0224 15:09:56.810151 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 15:09:56 crc kubenswrapper[4982]: E0224 15:09:56.810216 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs podName:ee761700-7a4b-4f96-8f99-31c55ed51962 nodeName:}" failed. No retries permitted until 2026-02-24 15:10:12.810195636 +0000 UTC m=+1274.429254189 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs") pod "openstack-operator-controller-manager-58cc7d798f-pqcz2" (UID: "ee761700-7a4b-4f96-8f99-31c55ed51962") : secret "metrics-server-cert" not found Feb 24 15:09:58 crc kubenswrapper[4982]: E0224 15:09:58.331749 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 24 15:09:58 crc kubenswrapper[4982]: E0224 15:09:58.331934 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6j4gr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-sl785_openstack-operators(7da568d1-374c-4687-8724-ceee9b3857a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:09:58 crc kubenswrapper[4982]: E0224 15:09:58.333322 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" 
podUID="7da568d1-374c-4687-8724-ceee9b3857a7" Feb 24 15:09:58 crc kubenswrapper[4982]: E0224 15:09:58.960705 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" Feb 24 15:09:58 crc kubenswrapper[4982]: E0224 15:09:58.960904 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7d4nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-xk8mt_openstack-operators(cdca3167-b0ff-41e3-8802-02d92f829aff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:09:58 crc kubenswrapper[4982]: E0224 15:09:58.962110 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" podUID="cdca3167-b0ff-41e3-8802-02d92f829aff" Feb 24 15:09:59 crc kubenswrapper[4982]: E0224 15:09:59.313388 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" podUID="cdca3167-b0ff-41e3-8802-02d92f829aff" Feb 24 15:09:59 crc kubenswrapper[4982]: E0224 15:09:59.315100 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" podUID="7da568d1-374c-4687-8724-ceee9b3857a7" Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.140106 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532430-mxqwb"] Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.141721 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532430-mxqwb" Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.144519 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.144739 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.149795 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532430-mxqwb"] Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.152294 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.289280 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69vbd\" (UniqueName: \"kubernetes.io/projected/3e9778f9-391c-4a7d-b680-09ed29470da1-kube-api-access-69vbd\") pod \"auto-csr-approver-29532430-mxqwb\" (UID: \"3e9778f9-391c-4a7d-b680-09ed29470da1\") " pod="openshift-infra/auto-csr-approver-29532430-mxqwb" Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.390535 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69vbd\" (UniqueName: \"kubernetes.io/projected/3e9778f9-391c-4a7d-b680-09ed29470da1-kube-api-access-69vbd\") pod \"auto-csr-approver-29532430-mxqwb\" (UID: \"3e9778f9-391c-4a7d-b680-09ed29470da1\") " pod="openshift-infra/auto-csr-approver-29532430-mxqwb" Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.413236 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69vbd\" (UniqueName: \"kubernetes.io/projected/3e9778f9-391c-4a7d-b680-09ed29470da1-kube-api-access-69vbd\") pod \"auto-csr-approver-29532430-mxqwb\" (UID: \"3e9778f9-391c-4a7d-b680-09ed29470da1\") " pod="openshift-infra/auto-csr-approver-29532430-mxqwb" Feb 24 15:10:00 crc kubenswrapper[4982]: I0224 15:10:00.464071 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532430-mxqwb" Feb 24 15:10:01 crc kubenswrapper[4982]: E0224 15:10:01.702477 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 24 15:10:01 crc kubenswrapper[4982]: E0224 15:10:01.702694 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8zsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-xzh2k_openstack-operators(8670907a-5fad-4602-8578-5eb1a19d1b44): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:10:01 crc kubenswrapper[4982]: E0224 15:10:01.703944 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" podUID="8670907a-5fad-4602-8578-5eb1a19d1b44" Feb 24 15:10:02 crc kubenswrapper[4982]: E0224 15:10:02.336494 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" podUID="8670907a-5fad-4602-8578-5eb1a19d1b44" Feb 24 15:10:03 crc kubenswrapper[4982]: E0224 15:10:03.058820 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98" Feb 24 15:10:03 crc kubenswrapper[4982]: E0224 15:10:03.059079 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dprx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5dc6794d5b-qhlv4_openstack-operators(95e748d2-45c9-4279-b1b4-9a0d18dce523): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:10:03 crc kubenswrapper[4982]: E0224 15:10:03.060760 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" 
podUID="95e748d2-45c9-4279-b1b4-9a0d18dce523" Feb 24 15:10:03 crc kubenswrapper[4982]: E0224 15:10:03.344133 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" podUID="95e748d2-45c9-4279-b1b4-9a0d18dce523" Feb 24 15:10:05 crc kubenswrapper[4982]: E0224 15:10:05.942296 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192" Feb 24 15:10:05 crc kubenswrapper[4982]: E0224 15:10:05.942672 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cnjr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5955d8c787-pmh6c_openstack-operators(d7687408-a30d-42c8-826f-759659e87262): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:10:05 crc kubenswrapper[4982]: E0224 15:10:05.943843 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" podUID="d7687408-a30d-42c8-826f-759659e87262" Feb 24 15:10:06 crc kubenswrapper[4982]: E0224 15:10:06.368370 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" podUID="d7687408-a30d-42c8-826f-759659e87262" Feb 24 15:10:06 crc kubenswrapper[4982]: E0224 15:10:06.640605 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06" Feb 24 15:10:06 crc kubenswrapper[4982]: E0224 15:10:06.640816 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4qz9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-659dc6bbfc-rgfzx_openstack-operators(c0967978-25a6-416a-81be-1153d5f5f74b): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:10:06 crc kubenswrapper[4982]: E0224 15:10:06.642035 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" podUID="c0967978-25a6-416a-81be-1153d5f5f74b" Feb 24 15:10:07 crc kubenswrapper[4982]: E0224 15:10:07.378810 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:c7c7d4228994efb8b93cfabe4d78b40b085d91848dc49db247b7bbca689dae06\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" podUID="c0967978-25a6-416a-81be-1153d5f5f74b" Feb 24 15:10:08 crc kubenswrapper[4982]: E0224 15:10:08.195281 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 24 15:10:08 crc kubenswrapper[4982]: E0224 15:10:08.195456 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wpsk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-j2q85_openstack-operators(2ed106e6-c770-4724-a803-29b4d1b74b6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:10:08 crc kubenswrapper[4982]: E0224 15:10:08.196691 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" podUID="2ed106e6-c770-4724-a803-29b4d1b74b6b" Feb 24 15:10:08 crc kubenswrapper[4982]: E0224 15:10:08.389310 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" podUID="2ed106e6-c770-4724-a803-29b4d1b74b6b" Feb 24 15:10:08 crc kubenswrapper[4982]: I0224 15:10:08.743150 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:10:08 crc kubenswrapper[4982]: I0224 15:10:08.743628 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:10:09 crc kubenswrapper[4982]: E0224 15:10:09.888944 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 24 15:10:09 crc kubenswrapper[4982]: E0224 15:10:09.889199 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kcr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-bqhfw_openstack-operators(c863f339-9142-4edc-b547-9bf0fd0d64bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:10:09 crc kubenswrapper[4982]: E0224 15:10:09.890711 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" podUID="c863f339-9142-4edc-b547-9bf0fd0d64bc" Feb 24 15:10:10 crc kubenswrapper[4982]: E0224 15:10:10.405953 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" podUID="c863f339-9142-4edc-b547-9bf0fd0d64bc" Feb 24 15:10:10 crc kubenswrapper[4982]: E0224 15:10:10.415494 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 24 15:10:10 crc kubenswrapper[4982]: E0224 15:10:10.415837 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kpfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-247rw_openstack-operators(d6146e6e-9a66-43aa-803c-df072ec31d11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:10:10 crc kubenswrapper[4982]: E0224 15:10:10.417667 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" podUID="d6146e6e-9a66-43aa-803c-df072ec31d11" Feb 24 15:10:11 crc kubenswrapper[4982]: E0224 15:10:11.412157 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" podUID="d6146e6e-9a66-43aa-803c-df072ec31d11" Feb 24 15:10:11 crc kubenswrapper[4982]: E0224 15:10:11.543316 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 24 15:10:11 crc kubenswrapper[4982]: E0224 15:10:11.543536 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glcmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-w8g8f_openstack-operators(f03d42ea-ad31-451a-99a7-c1ecc595f924): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:10:11 crc kubenswrapper[4982]: E0224 15:10:11.544719 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" podUID="f03d42ea-ad31-451a-99a7-c1ecc595f924" Feb 24 15:10:11 crc kubenswrapper[4982]: E0224 15:10:11.630042 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.69:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781" Feb 24 15:10:11 crc kubenswrapper[4982]: E0224 15:10:11.630101 4982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.69:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781" Feb 24 15:10:11 crc kubenswrapper[4982]: E0224 15:10:11.630300 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.69:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qkzsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6c7fcb66df-f9pl4_openstack-operators(d4d8baf6-e8b2-4a05-b73e-ca563c3bb172): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:10:11 crc kubenswrapper[4982]: E0224 15:10:11.632184 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" podUID="d4d8baf6-e8b2-4a05-b73e-ca563c3bb172" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.019005 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: \"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.027874 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7dbd2798-1deb-4014-9bad-8446f47f49e8-cert\") pod \"infra-operator-controller-manager-79d975b745-pcpml\" (UID: 
\"7dbd2798-1deb-4014-9bad-8446f47f49e8\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.070489 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532430-mxqwb"] Feb 24 15:10:12 crc kubenswrapper[4982]: W0224 15:10:12.077259 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e9778f9_391c_4a7d_b680_09ed29470da1.slice/crio-2e92165e54a8b733357802de17c892c79b3ee187d2969884ef34771a1fb578a1 WatchSource:0}: Error finding container 2e92165e54a8b733357802de17c892c79b3ee187d2969884ef34771a1fb578a1: Status 404 returned error can't find the container with id 2e92165e54a8b733357802de17c892c79b3ee187d2969884ef34771a1fb578a1 Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.196849 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-d4gk6" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.204663 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.428933 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" event={"ID":"b5bb21b9-5878-4960-9bd6-f46b48419f59","Type":"ContainerStarted","Data":"f48878ad7b5bb9bb2fee6ed4f4211eb048d6c81e7b464a8f85d061e85fb4f3a4"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.429361 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.439699 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" event={"ID":"5602df8b-a253-42b4-8b7d-93a3a793fa2a","Type":"ContainerStarted","Data":"cb5b784e2266167fbcfa2fc4dd815dd32266166a1807d23d3eab526f144e8c90"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.439833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.440383 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.450247 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4924244-1803-429b-9c50-8a5c33b1f1b6-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c7st96\" (UID: \"c4924244-1803-429b-9c50-8a5c33b1f1b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.453087 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" podStartSLOduration=11.362979977 podStartE2EDuration="33.453072879s" 
podCreationTimestamp="2026-02-24 15:09:39 +0000 UTC" firstStartedPulling="2026-02-24 15:09:40.950812249 +0000 UTC m=+1242.569870752" lastFinishedPulling="2026-02-24 15:10:03.040905161 +0000 UTC m=+1264.659963654" observedRunningTime="2026-02-24 15:10:12.449926787 +0000 UTC m=+1274.068985290" watchObservedRunningTime="2026-02-24 15:10:12.453072879 +0000 UTC m=+1274.072131372" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.453465 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532430-mxqwb" event={"ID":"3e9778f9-391c-4a7d-b680-09ed29470da1","Type":"ContainerStarted","Data":"2e92165e54a8b733357802de17c892c79b3ee187d2969884ef34771a1fb578a1"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.455272 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" event={"ID":"b0f19215-2346-4a5a-8b4a-30f19af5db6c","Type":"ContainerStarted","Data":"15d23f2c9623dc329a3392e67be297fa9dae7285014004a54b8e87a3cc5080fe"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.455999 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.462452 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" event={"ID":"12f6eea3-aefd-485f-a582-af40549cefa0","Type":"ContainerStarted","Data":"29ec3ff0345b10cff0b94357f733c07297067ab46309bfb745e5f66cca1a6045"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.462534 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.467202 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" event={"ID":"5da22390-3c90-4096-8f6b-ac0f8feb4f46","Type":"ContainerStarted","Data":"6cc6471a334bcc2d173bbd4575f98c746e1d8d9527e9c51dead76f31937fc80a"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.468518 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.487959 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" event={"ID":"84e2fa7b-8efb-4e72-b6a4-42b10ab15984","Type":"ContainerStarted","Data":"aca8069421215d1f295fab9070003fcbf9d1664638fa88e035f449fc1ef83c4b"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.488845 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.488972 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5hbcz" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.493922 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" event={"ID":"5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7","Type":"ContainerStarted","Data":"df0501834f6e1c2460d527c5dce62ba71494a3c65b564d6ab8a73d0bba3efd45"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 
15:10:12.494410 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.496555 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" podStartSLOduration=5.052355774 podStartE2EDuration="33.496530804s" podCreationTimestamp="2026-02-24 15:09:39 +0000 UTC" firstStartedPulling="2026-02-24 15:09:41.408264998 +0000 UTC m=+1243.027323491" lastFinishedPulling="2026-02-24 15:10:09.852440028 +0000 UTC m=+1271.471498521" observedRunningTime="2026-02-24 15:10:12.484913286 +0000 UTC m=+1274.103971779" watchObservedRunningTime="2026-02-24 15:10:12.496530804 +0000 UTC m=+1274.115589297" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.496673 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.522818 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" podStartSLOduration=5.780607847 podStartE2EDuration="33.522800659s" podCreationTimestamp="2026-02-24 15:09:39 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.11002014 +0000 UTC m=+1243.729078633" lastFinishedPulling="2026-02-24 15:10:09.852212942 +0000 UTC m=+1271.471271445" observedRunningTime="2026-02-24 15:10:12.521430919 +0000 UTC m=+1274.140489442" watchObservedRunningTime="2026-02-24 15:10:12.522800659 +0000 UTC m=+1274.141859152" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.526562 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" event={"ID":"c45b9d77-86d6-4763-b9aa-44549e04016a","Type":"ContainerStarted","Data":"dabb5a857d5b7e9af0b71658e13778ae74cdbe14ccabfaeee8877e0d4dbe4c7f"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.527519 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.588113 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" event={"ID":"e07053fe-da4d-437b-b884-659d18acc903","Type":"ContainerStarted","Data":"b94ab75044a944e18dcad85ae79a319dd1f0ea48be53e148f6c327755451b3d1"} Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.588167 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" Feb 24 15:10:12 crc kubenswrapper[4982]: E0224 15:10:12.590420 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" podUID="f03d42ea-ad31-451a-99a7-c1ecc595f924" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.600050 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" 
podStartSLOduration=3.730887837 podStartE2EDuration="33.600019817s" podCreationTimestamp="2026-02-24 15:09:39 +0000 UTC" firstStartedPulling="2026-02-24 15:09:41.176436928 +0000 UTC m=+1242.795495431" lastFinishedPulling="2026-02-24 15:10:11.045568878 +0000 UTC m=+1272.664627411" observedRunningTime="2026-02-24 15:10:12.557270143 +0000 UTC m=+1274.176328636" watchObservedRunningTime="2026-02-24 15:10:12.600019817 +0000 UTC m=+1274.219078310" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.621455 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" podStartSLOduration=6.600247499 podStartE2EDuration="32.621433421s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.120145065 +0000 UTC m=+1243.739203558" lastFinishedPulling="2026-02-24 15:10:08.141330967 +0000 UTC m=+1269.760389480" observedRunningTime="2026-02-24 15:10:12.58842162 +0000 UTC m=+1274.207480133" watchObservedRunningTime="2026-02-24 15:10:12.621433421 +0000 UTC m=+1274.240491914" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.663745 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" podStartSLOduration=3.620785268 podStartE2EDuration="32.663718062s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.589250964 +0000 UTC m=+1244.208309457" lastFinishedPulling="2026-02-24 15:10:11.632183718 +0000 UTC m=+1273.251242251" observedRunningTime="2026-02-24 15:10:12.633909664 +0000 UTC m=+1274.252968157" watchObservedRunningTime="2026-02-24 15:10:12.663718062 +0000 UTC m=+1274.282776555" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.698519 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" podStartSLOduration=9.181658705 podStartE2EDuration="33.698487755s" podCreationTimestamp="2026-02-24 15:09:39 +0000 UTC" firstStartedPulling="2026-02-24 15:09:41.404074526 +0000 UTC m=+1243.023133019" lastFinishedPulling="2026-02-24 15:10:05.920903576 +0000 UTC m=+1267.539962069" observedRunningTime="2026-02-24 15:10:12.697726572 +0000 UTC m=+1274.316785065" watchObservedRunningTime="2026-02-24 15:10:12.698487755 +0000 UTC m=+1274.317546248" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.751306 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" podStartSLOduration=4.020894168 podStartE2EDuration="32.751281331s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.899306942 +0000 UTC m=+1244.518365425" lastFinishedPulling="2026-02-24 15:10:11.629694055 +0000 UTC m=+1273.248752588" observedRunningTime="2026-02-24 15:10:12.742800854 +0000 UTC m=+1274.361859347" watchObservedRunningTime="2026-02-24 15:10:12.751281331 +0000 UTC m=+1274.370339834" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.766121 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" podStartSLOduration=4.710072408 podStartE2EDuration="33.766103573s" podCreationTimestamp="2026-02-24 15:09:39 +0000 UTC" firstStartedPulling="2026-02-24 15:09:41.988685927 +0000 UTC m=+1243.607744420" lastFinishedPulling="2026-02-24 15:10:11.044717092 
+0000 UTC m=+1272.663775585" observedRunningTime="2026-02-24 15:10:12.765552977 +0000 UTC m=+1274.384611470" watchObservedRunningTime="2026-02-24 15:10:12.766103573 +0000 UTC m=+1274.385162056" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.837872 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-pcpml"] Feb 24 15:10:12 crc kubenswrapper[4982]: W0224 15:10:12.844065 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dbd2798_1deb_4014_9bad_8446f47f49e8.slice/crio-75bf53a55508717ff96b3d5fcd6f70f0873bfb46327e3b1bc5b2c73209d475f8 WatchSource:0}: Error finding container 75bf53a55508717ff96b3d5fcd6f70f0873bfb46327e3b1bc5b2c73209d475f8: Status 404 returned error can't find the container with id 75bf53a55508717ff96b3d5fcd6f70f0873bfb46327e3b1bc5b2c73209d475f8 Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.849475 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.849568 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.853933 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-metrics-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.854003 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ee761700-7a4b-4f96-8f99-31c55ed51962-webhook-certs\") pod \"openstack-operator-controller-manager-58cc7d798f-pqcz2\" (UID: \"ee761700-7a4b-4f96-8f99-31c55ed51962\") " pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.893992 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-h4z77" Feb 24 15:10:12 crc kubenswrapper[4982]: I0224 15:10:12.895352 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:10:13 crc kubenswrapper[4982]: I0224 15:10:13.281976 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96"] Feb 24 15:10:13 crc kubenswrapper[4982]: I0224 15:10:13.304428 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2"] Feb 24 15:10:13 crc kubenswrapper[4982]: I0224 15:10:13.638124 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" event={"ID":"ee761700-7a4b-4f96-8f99-31c55ed51962","Type":"ContainerStarted","Data":"b5a602f40efeffc184032052f86ab5905b64cd983dff24d98eceff97c15ffe3b"} Feb 24 15:10:13 crc kubenswrapper[4982]: I0224 15:10:13.664671 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" event={"ID":"c4924244-1803-429b-9c50-8a5c33b1f1b6","Type":"ContainerStarted","Data":"f1429b50f59be8d7e47ebfa631408a701ccf841d5d47ca9dacfd0bf61067b92e"} Feb 24 15:10:13 crc kubenswrapper[4982]: I0224 15:10:13.714686 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" event={"ID":"7da568d1-374c-4687-8724-ceee9b3857a7","Type":"ContainerStarted","Data":"0e45bd6b177596759fa261cd26e78a68e7094e652f5e527aa06a4f93c9c4b8a3"} Feb 24 15:10:13 crc kubenswrapper[4982]: I0224 15:10:13.715838 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" Feb 24 15:10:13 crc kubenswrapper[4982]: I0224 15:10:13.761475 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" podStartSLOduration=4.447186073 podStartE2EDuration="34.761457194s" podCreationTimestamp="2026-02-24 15:09:39 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.561416373 +0000 UTC m=+1244.180474866" lastFinishedPulling="2026-02-24 15:10:12.875687494 +0000 UTC m=+1274.494745987" observedRunningTime="2026-02-24 15:10:13.760120285 +0000 UTC m=+1275.379178768" watchObservedRunningTime="2026-02-24 15:10:13.761457194 +0000 UTC m=+1275.380515687" Feb 24 15:10:13 crc kubenswrapper[4982]: I0224 15:10:13.761647 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" event={"ID":"7dbd2798-1deb-4014-9bad-8446f47f49e8","Type":"ContainerStarted","Data":"75bf53a55508717ff96b3d5fcd6f70f0873bfb46327e3b1bc5b2c73209d475f8"} Feb 24 15:10:14 crc kubenswrapper[4982]: I0224 15:10:14.793400 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" event={"ID":"cdca3167-b0ff-41e3-8802-02d92f829aff","Type":"ContainerStarted","Data":"0dacd19a1c81465fda64532d3330841734a58e48dc1ca885033b645bda51864e"} Feb 24 15:10:14 crc kubenswrapper[4982]: I0224 15:10:14.794982 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" Feb 24 15:10:14 crc kubenswrapper[4982]: I0224 15:10:14.809544 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" 
event={"ID":"ee761700-7a4b-4f96-8f99-31c55ed51962","Type":"ContainerStarted","Data":"c297ab740699c5fe2072b19216c2414e3737fb5e75e88dab559b321671c33ca3"} Feb 24 15:10:14 crc kubenswrapper[4982]: I0224 15:10:14.812055 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:10:14 crc kubenswrapper[4982]: I0224 15:10:14.820537 4982 generic.go:334] "Generic (PLEG): container finished" podID="3e9778f9-391c-4a7d-b680-09ed29470da1" containerID="ea3ae698b79944d109c5af1612d6c18d27f1bfd0637464c73fc7226a9fb62aad" exitCode=0 Feb 24 15:10:14 crc kubenswrapper[4982]: I0224 15:10:14.821428 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532430-mxqwb" event={"ID":"3e9778f9-391c-4a7d-b680-09ed29470da1","Type":"ContainerDied","Data":"ea3ae698b79944d109c5af1612d6c18d27f1bfd0637464c73fc7226a9fb62aad"} Feb 24 15:10:14 crc kubenswrapper[4982]: I0224 15:10:14.821691 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" podStartSLOduration=2.535424947 podStartE2EDuration="34.821674354s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.15259389 +0000 UTC m=+1243.771652393" lastFinishedPulling="2026-02-24 15:10:14.438843307 +0000 UTC m=+1276.057901800" observedRunningTime="2026-02-24 15:10:14.811334253 +0000 UTC m=+1276.430392746" watchObservedRunningTime="2026-02-24 15:10:14.821674354 +0000 UTC m=+1276.440732867" Feb 24 15:10:14 crc kubenswrapper[4982]: I0224 15:10:14.848146 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" podStartSLOduration=34.848127284 podStartE2EDuration="34.848127284s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:10:14.841695946 +0000 UTC m=+1276.460754449" watchObservedRunningTime="2026-02-24 15:10:14.848127284 +0000 UTC m=+1276.467185777" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.353680 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532430-mxqwb" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.430032 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69vbd\" (UniqueName: \"kubernetes.io/projected/3e9778f9-391c-4a7d-b680-09ed29470da1-kube-api-access-69vbd\") pod \"3e9778f9-391c-4a7d-b680-09ed29470da1\" (UID: \"3e9778f9-391c-4a7d-b680-09ed29470da1\") " Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.452359 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9778f9-391c-4a7d-b680-09ed29470da1-kube-api-access-69vbd" (OuterVolumeSpecName: "kube-api-access-69vbd") pod "3e9778f9-391c-4a7d-b680-09ed29470da1" (UID: "3e9778f9-391c-4a7d-b680-09ed29470da1"). InnerVolumeSpecName "kube-api-access-69vbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.531047 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69vbd\" (UniqueName: \"kubernetes.io/projected/3e9778f9-391c-4a7d-b680-09ed29470da1-kube-api-access-69vbd\") on node \"crc\" DevicePath \"\"" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.871775 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" event={"ID":"95e748d2-45c9-4279-b1b4-9a0d18dce523","Type":"ContainerStarted","Data":"b17d31dd5f8ecd4cb14c4df6685eb5dea46b1a7c0e7385cbdf2417e4296a373b"} Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.871980 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.874144 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532430-mxqwb" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.875086 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532430-mxqwb" event={"ID":"3e9778f9-391c-4a7d-b680-09ed29470da1","Type":"ContainerDied","Data":"2e92165e54a8b733357802de17c892c79b3ee187d2969884ef34771a1fb578a1"} Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.875137 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e92165e54a8b733357802de17c892c79b3ee187d2969884ef34771a1fb578a1" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.877732 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" event={"ID":"8670907a-5fad-4602-8578-5eb1a19d1b44","Type":"ContainerStarted","Data":"b78a23ab001ad283a07f90d7f099a59a3888438fb4597b692763749c9c17891c"} Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.878555 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.880586 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" event={"ID":"c4924244-1803-429b-9c50-8a5c33b1f1b6","Type":"ContainerStarted","Data":"398fab607d4db2e08667dfb4535612c5fd3643fb06248831497e633b44bfb613"} Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.880767 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.882365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" event={"ID":"7dbd2798-1deb-4014-9bad-8446f47f49e8","Type":"ContainerStarted","Data":"9c87b1ff3d7622c8e62e9f5c04d2e6a4020d26592cdd8820038e36df0f973bd3"} Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.882592 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.913262 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" 
podStartSLOduration=3.427390877 podStartE2EDuration="39.91323635s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.897767597 +0000 UTC m=+1244.516826090" lastFinishedPulling="2026-02-24 15:10:19.38361307 +0000 UTC m=+1281.002671563" observedRunningTime="2026-02-24 15:10:19.903661052 +0000 UTC m=+1281.522719555" watchObservedRunningTime="2026-02-24 15:10:19.91323635 +0000 UTC m=+1281.532294843" Feb 24 15:10:19 crc kubenswrapper[4982]: I0224 15:10:19.953787 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" podStartSLOduration=33.896689281 podStartE2EDuration="39.953768641s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:10:13.319548507 +0000 UTC m=+1274.938607000" lastFinishedPulling="2026-02-24 15:10:19.376627867 +0000 UTC m=+1280.995686360" observedRunningTime="2026-02-24 15:10:19.945295924 +0000 UTC m=+1281.564354417" watchObservedRunningTime="2026-02-24 15:10:19.953768641 +0000 UTC m=+1281.572827134" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.002530 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" podStartSLOduration=3.22070479 podStartE2EDuration="40.00251242s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.609257237 +0000 UTC m=+1244.228315730" lastFinishedPulling="2026-02-24 15:10:19.391064847 +0000 UTC m=+1281.010123360" observedRunningTime="2026-02-24 15:10:19.995930118 +0000 UTC m=+1281.614988611" watchObservedRunningTime="2026-02-24 15:10:20.00251242 +0000 UTC m=+1281.621570913" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.043577 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" podStartSLOduration=34.524289568 podStartE2EDuration="41.043560965s" podCreationTimestamp="2026-02-24 15:09:39 +0000 UTC" firstStartedPulling="2026-02-24 15:10:12.8566617 +0000 UTC m=+1274.475720193" lastFinishedPulling="2026-02-24 15:10:19.375933057 +0000 UTC m=+1280.994991590" observedRunningTime="2026-02-24 15:10:20.038540199 +0000 UTC m=+1281.657598702" watchObservedRunningTime="2026-02-24 15:10:20.043560965 +0000 UTC m=+1281.662619458" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.146645 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xdgp8" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.184076 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-rqq7v" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.425838 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-zmxj9" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.426417 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532424-4mpf8"] Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.434770 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-4fbcm" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.435100 4982 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532424-4mpf8"] Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.513964 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-f5stb" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.523485 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-8dxfx" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.680794 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-sl785" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.716038 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.716854 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-fwqhl" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.865995 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-l8xls" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.892778 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" event={"ID":"c0967978-25a6-416a-81be-1153d5f5f74b","Type":"ContainerStarted","Data":"58a15032375c28cfc74aa0081b3ccdc35d16d972d4275db936e63019a84f6231"} Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.893064 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.894686 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" event={"ID":"2ed106e6-c770-4724-a803-29b4d1b74b6b","Type":"ContainerStarted","Data":"9f3e2caab9796f8b82626d7475a6180c87a47cf9cebc8985e2063b04e36d3898"} Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.895580 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.918544 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" podStartSLOduration=2.9077147869999997 podStartE2EDuration="40.918528271s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.584220798 +0000 UTC m=+1244.203279291" lastFinishedPulling="2026-02-24 15:10:20.595034282 +0000 UTC m=+1282.214092775" observedRunningTime="2026-02-24 15:10:20.913078822 +0000 UTC m=+1282.532137325" watchObservedRunningTime="2026-02-24 15:10:20.918528271 +0000 UTC m=+1282.537586764" Feb 24 15:10:20 crc kubenswrapper[4982]: I0224 15:10:20.937628 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" podStartSLOduration=3.133590313 podStartE2EDuration="40.937488103s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 
15:09:42.854416815 +0000 UTC m=+1244.473475308" lastFinishedPulling="2026-02-24 15:10:20.658314605 +0000 UTC m=+1282.277373098" observedRunningTime="2026-02-24 15:10:20.931125018 +0000 UTC m=+1282.550183511" watchObservedRunningTime="2026-02-24 15:10:20.937488103 +0000 UTC m=+1282.556546596" Feb 24 15:10:21 crc kubenswrapper[4982]: I0224 15:10:21.157013 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e34c8b9-31ce-4317-9448-afa278a07724" path="/var/lib/kubelet/pods/6e34c8b9-31ce-4317-9448-afa278a07724/volumes" Feb 24 15:10:21 crc kubenswrapper[4982]: I0224 15:10:21.195055 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-pjtlt" Feb 24 15:10:21 crc kubenswrapper[4982]: I0224 15:10:21.905259 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" event={"ID":"d7687408-a30d-42c8-826f-759659e87262","Type":"ContainerStarted","Data":"c9a68fe9dd0eddd6f628c26e6f3170cd3ee29dda84e88b9d24e80ecf6fc76cc1"} Feb 24 15:10:21 crc kubenswrapper[4982]: I0224 15:10:21.905591 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" Feb 24 15:10:21 crc kubenswrapper[4982]: I0224 15:10:21.920693 4982 scope.go:117] "RemoveContainer" containerID="5c04d4c0b5153e3bc2cb4f8c5e01966537b088949fa13ef16466a5117a562180" Feb 24 15:10:21 crc kubenswrapper[4982]: I0224 15:10:21.929982 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" podStartSLOduration=3.203037295 podStartE2EDuration="41.929962601s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.858354679 +0000 UTC m=+1244.477413182" lastFinishedPulling="2026-02-24 15:10:21.585279995 +0000 UTC m=+1283.204338488" observedRunningTime="2026-02-24 15:10:21.924765899 +0000 UTC m=+1283.543824392" watchObservedRunningTime="2026-02-24 15:10:21.929962601 +0000 UTC m=+1283.549021094" Feb 24 15:10:22 crc kubenswrapper[4982]: I0224 15:10:22.901129 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58cc7d798f-pqcz2" Feb 24 15:10:22 crc kubenswrapper[4982]: I0224 15:10:22.916406 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" event={"ID":"c863f339-9142-4edc-b547-9bf0fd0d64bc","Type":"ContainerStarted","Data":"2026c0ec98ac669b36c9a5f02978ba66c4bd2528344089025f8107485b50b7cd"} Feb 24 15:10:22 crc kubenswrapper[4982]: I0224 15:10:22.916720 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" Feb 24 15:10:22 crc kubenswrapper[4982]: I0224 15:10:22.974813 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" podStartSLOduration=3.423284532 podStartE2EDuration="43.974785832s" podCreationTimestamp="2026-02-24 15:09:39 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.080734818 +0000 UTC m=+1243.699793311" lastFinishedPulling="2026-02-24 15:10:22.632236078 +0000 UTC m=+1284.251294611" observedRunningTime="2026-02-24 15:10:22.955955724 +0000 UTC m=+1284.575014227" watchObservedRunningTime="2026-02-24 
15:10:22.974785832 +0000 UTC m=+1284.593844345" Feb 24 15:10:23 crc kubenswrapper[4982]: I0224 15:10:23.925897 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" event={"ID":"d6146e6e-9a66-43aa-803c-df072ec31d11","Type":"ContainerStarted","Data":"f703d01f7d8c94cee7fe917a3a7c81f9fa31ce95967c1d11b21b69a816566f8f"} Feb 24 15:10:23 crc kubenswrapper[4982]: I0224 15:10:23.926428 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" Feb 24 15:10:23 crc kubenswrapper[4982]: I0224 15:10:23.945058 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" podStartSLOduration=2.456495758 podStartE2EDuration="43.945037423s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.200141264 +0000 UTC m=+1243.819199757" lastFinishedPulling="2026-02-24 15:10:23.688682929 +0000 UTC m=+1285.307741422" observedRunningTime="2026-02-24 15:10:23.942279123 +0000 UTC m=+1285.561337616" watchObservedRunningTime="2026-02-24 15:10:23.945037423 +0000 UTC m=+1285.564095916" Feb 24 15:10:25 crc kubenswrapper[4982]: E0224 15:10:25.146006 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.69:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" podUID="d4d8baf6-e8b2-4a05-b73e-ca563c3bb172" Feb 24 15:10:28 crc kubenswrapper[4982]: I0224 15:10:28.964541 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" event={"ID":"f03d42ea-ad31-451a-99a7-c1ecc595f924","Type":"ContainerStarted","Data":"687f35008073ba04ad69e6e10f12305100a1b813d67dbf4ab976bca1026731ef"} Feb 24 15:10:28 crc kubenswrapper[4982]: I0224 15:10:28.989318 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-w8g8f" podStartSLOduration=4.132334933 podStartE2EDuration="48.989294624s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.852169629 +0000 UTC m=+1244.471228122" lastFinishedPulling="2026-02-24 15:10:27.70912929 +0000 UTC m=+1289.328187813" observedRunningTime="2026-02-24 15:10:28.978882221 +0000 UTC m=+1290.597940724" watchObservedRunningTime="2026-02-24 15:10:28.989294624 +0000 UTC m=+1290.608353117" Feb 24 15:10:30 crc kubenswrapper[4982]: I0224 15:10:30.681165 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-bqhfw" Feb 24 15:10:30 crc kubenswrapper[4982]: I0224 15:10:30.848869 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-247rw" Feb 24 15:10:30 crc kubenswrapper[4982]: I0224 15:10:30.878869 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" Feb 24 15:10:30 crc kubenswrapper[4982]: I0224 15:10:30.900760 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" Feb 24 15:10:30 crc kubenswrapper[4982]: I0224 15:10:30.983238 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-xzh2k" Feb 24 15:10:31 crc kubenswrapper[4982]: I0224 15:10:31.009133 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j2q85" Feb 24 15:10:31 crc kubenswrapper[4982]: I0224 15:10:31.108951 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-qhlv4" Feb 24 15:10:32 crc kubenswrapper[4982]: I0224 15:10:32.216056 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-pcpml" Feb 24 15:10:32 crc kubenswrapper[4982]: I0224 15:10:32.504101 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c7st96" Feb 24 15:10:37 crc kubenswrapper[4982]: I0224 15:10:37.034484 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" event={"ID":"d4d8baf6-e8b2-4a05-b73e-ca563c3bb172","Type":"ContainerStarted","Data":"2c5a876602633c772615fee130891baea7e48fe46bb593b58bfdbb7422ef5881"} Feb 24 15:10:37 crc kubenswrapper[4982]: I0224 15:10:37.035412 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" Feb 24 15:10:37 crc kubenswrapper[4982]: I0224 15:10:37.056088 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" podStartSLOduration=3.7339365129999997 podStartE2EDuration="57.05606869s" podCreationTimestamp="2026-02-24 15:09:40 +0000 UTC" firstStartedPulling="2026-02-24 15:09:42.898685513 +0000 UTC m=+1244.517744006" lastFinishedPulling="2026-02-24 15:10:36.22081769 +0000 UTC m=+1297.839876183" observedRunningTime="2026-02-24 15:10:37.051545429 +0000 UTC m=+1298.670603952" watchObservedRunningTime="2026-02-24 15:10:37.05606869 +0000 UTC m=+1298.675127183" Feb 24 15:10:38 crc kubenswrapper[4982]: I0224 15:10:38.738660 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:10:38 crc kubenswrapper[4982]: I0224 15:10:38.739051 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:10:41 crc kubenswrapper[4982]: I0224 15:10:41.124358 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6c7fcb66df-f9pl4" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.377877 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gzknf"] Feb 24 15:10:58 crc 
kubenswrapper[4982]: E0224 15:10:58.378692 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9778f9-391c-4a7d-b680-09ed29470da1" containerName="oc" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.378706 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9778f9-391c-4a7d-b680-09ed29470da1" containerName="oc" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.378866 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9778f9-391c-4a7d-b680-09ed29470da1" containerName="oc" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.379717 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.386649 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.387578 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4jm88" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.387011 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.387133 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.397996 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gzknf"] Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.445438 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9ddw"] Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.447132 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.452243 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.458912 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9ddw"] Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.556567 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85e7659-e0f8-4093-919d-e33963e6c286-config\") pod \"dnsmasq-dns-675f4bcbfc-gzknf\" (UID: \"e85e7659-e0f8-4093-919d-e33963e6c286\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.556943 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-f9ddw\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.557104 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5x5\" (UniqueName: \"kubernetes.io/projected/e85e7659-e0f8-4093-919d-e33963e6c286-kube-api-access-zp5x5\") pod \"dnsmasq-dns-675f4bcbfc-gzknf\" (UID: \"e85e7659-e0f8-4093-919d-e33963e6c286\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.557222 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-config\") pod \"dnsmasq-dns-78dd6ddcc-f9ddw\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.557416 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whvwv\" (UniqueName: \"kubernetes.io/projected/05655848-0e8b-45b6-88c4-009e52d74f51-kube-api-access-whvwv\") pod \"dnsmasq-dns-78dd6ddcc-f9ddw\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.659194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-config\") pod \"dnsmasq-dns-78dd6ddcc-f9ddw\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.660136 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-config\") pod \"dnsmasq-dns-78dd6ddcc-f9ddw\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.661906 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whvwv\" (UniqueName: \"kubernetes.io/projected/05655848-0e8b-45b6-88c4-009e52d74f51-kube-api-access-whvwv\") pod \"dnsmasq-dns-78dd6ddcc-f9ddw\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 
crc kubenswrapper[4982]: I0224 15:10:58.662443 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85e7659-e0f8-4093-919d-e33963e6c286-config\") pod \"dnsmasq-dns-675f4bcbfc-gzknf\" (UID: \"e85e7659-e0f8-4093-919d-e33963e6c286\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.663512 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85e7659-e0f8-4093-919d-e33963e6c286-config\") pod \"dnsmasq-dns-675f4bcbfc-gzknf\" (UID: \"e85e7659-e0f8-4093-919d-e33963e6c286\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.663765 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-f9ddw\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.664522 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-f9ddw\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.665056 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp5x5\" (UniqueName: \"kubernetes.io/projected/e85e7659-e0f8-4093-919d-e33963e6c286-kube-api-access-zp5x5\") pod \"dnsmasq-dns-675f4bcbfc-gzknf\" (UID: \"e85e7659-e0f8-4093-919d-e33963e6c286\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.686244 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp5x5\" (UniqueName: \"kubernetes.io/projected/e85e7659-e0f8-4093-919d-e33963e6c286-kube-api-access-zp5x5\") pod \"dnsmasq-dns-675f4bcbfc-gzknf\" (UID: \"e85e7659-e0f8-4093-919d-e33963e6c286\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.686295 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whvwv\" (UniqueName: \"kubernetes.io/projected/05655848-0e8b-45b6-88c4-009e52d74f51-kube-api-access-whvwv\") pod \"dnsmasq-dns-78dd6ddcc-f9ddw\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.717049 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:10:58 crc kubenswrapper[4982]: I0224 15:10:58.772697 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:10:59 crc kubenswrapper[4982]: I0224 15:10:59.219568 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gzknf"] Feb 24 15:10:59 crc kubenswrapper[4982]: I0224 15:10:59.275959 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" event={"ID":"e85e7659-e0f8-4093-919d-e33963e6c286","Type":"ContainerStarted","Data":"39859e132c1b83e5ca9d6b7027f503b70287648c1243750e4cc883c47c9c8419"} Feb 24 15:10:59 crc kubenswrapper[4982]: I0224 15:10:59.330810 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9ddw"] Feb 24 15:11:00 crc kubenswrapper[4982]: I0224 15:11:00.288046 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" event={"ID":"05655848-0e8b-45b6-88c4-009e52d74f51","Type":"ContainerStarted","Data":"aa90bb27c94f7c8fc7e2394f977bc1f177a9ed50b669a4bb9620f27074d29c5e"} Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.026770 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gzknf"] Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.067237 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vvxd8"] Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.073002 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.094362 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vvxd8"] Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.216116 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-config\") pod \"dnsmasq-dns-666b6646f7-vvxd8\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") " pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.216194 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hx8\" (UniqueName: \"kubernetes.io/projected/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-kube-api-access-w2hx8\") pod \"dnsmasq-dns-666b6646f7-vvxd8\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") " pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.216254 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vvxd8\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") " pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.317681 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-config\") pod \"dnsmasq-dns-666b6646f7-vvxd8\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") " pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.317759 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2hx8\" (UniqueName: \"kubernetes.io/projected/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-kube-api-access-w2hx8\") pod 
\"dnsmasq-dns-666b6646f7-vvxd8\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") " pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.317804 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vvxd8\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") " pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.318717 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vvxd8\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") " pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.318847 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-config\") pod \"dnsmasq-dns-666b6646f7-vvxd8\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") " pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.349483 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2hx8\" (UniqueName: \"kubernetes.io/projected/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-kube-api-access-w2hx8\") pod \"dnsmasq-dns-666b6646f7-vvxd8\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") " pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.395243 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9ddw"] Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.403073 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.420235 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghtrh"] Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.423929 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.437162 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghtrh"] Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.520610 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-config\") pod \"dnsmasq-dns-57d769cc4f-ghtrh\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.520972 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ghtrh\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.521036 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fmff\" (UniqueName: \"kubernetes.io/projected/bea504cd-94bd-4526-a1c8-e26fc2bcb918-kube-api-access-5fmff\") pod \"dnsmasq-dns-57d769cc4f-ghtrh\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.623146 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ghtrh\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.623233 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fmff\" (UniqueName: \"kubernetes.io/projected/bea504cd-94bd-4526-a1c8-e26fc2bcb918-kube-api-access-5fmff\") pod \"dnsmasq-dns-57d769cc4f-ghtrh\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.623315 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-config\") pod \"dnsmasq-dns-57d769cc4f-ghtrh\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.627891 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-config\") pod \"dnsmasq-dns-57d769cc4f-ghtrh\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.628693 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ghtrh\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.662318 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fmff\" (UniqueName: 
\"kubernetes.io/projected/bea504cd-94bd-4526-a1c8-e26fc2bcb918-kube-api-access-5fmff\") pod \"dnsmasq-dns-57d769cc4f-ghtrh\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") " pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:01 crc kubenswrapper[4982]: I0224 15:11:01.774704 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.030213 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vvxd8"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.222065 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.224057 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.227670 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.229357 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.229654 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-42sjd" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.229784 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.230167 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.230310 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.230448 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.260409 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.274296 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.275814 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.286633 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.288414 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.303161 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.313560 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.324958 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" event={"ID":"1d7115b8-e3e3-42b8-bf78-964d9b28ff90","Type":"ContainerStarted","Data":"d241cfe696ea6c73412af705d9185c1c969c4ca88bb6212900162e790f3b6461"} Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343419 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343491 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343539 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343567 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343599 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-config-data\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343621 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqn2v\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-kube-api-access-vqn2v\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343644 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343668 
4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/513f6549-901c-4faf-9011-af95fe7398ae-pod-info\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343706 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-server-conf\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343739 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/511c8aa0-4327-455c-8caa-66bc442d199f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343765 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/513f6549-901c-4faf-9011-af95fe7398ae-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343801 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvwb\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-kube-api-access-mfvwb\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343824 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/511c8aa0-4327-455c-8caa-66bc442d199f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343852 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343886 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343906 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-config-data\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343954 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.343976 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.344002 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.344023 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.344046 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.344068 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: W0224 15:11:02.378404 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbea504cd_94bd_4526_a1c8_e26fc2bcb918.slice/crio-44a71638b4b49056dde35e8063a1d7de2edded9a95ea73a4970a6532991d8dba WatchSource:0}: Error finding container 44a71638b4b49056dde35e8063a1d7de2edded9a95ea73a4970a6532991d8dba: Status 404 returned error can't find the container with id 44a71638b4b49056dde35e8063a1d7de2edded9a95ea73a4970a6532991d8dba Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.379190 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghtrh"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449250 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hglx\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-kube-api-access-6hglx\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449298 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449316 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449336 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449355 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449371 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449389 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449406 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449432 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449685 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449711 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 
15:11:02.449730 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449746 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449767 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449782 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-config-data\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449801 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqn2v\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-kube-api-access-vqn2v\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449818 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449838 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/513f6549-901c-4faf-9011-af95fe7398ae-pod-info\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449862 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449882 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-server-conf\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449900 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449919 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-config-data\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449937 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.449956 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/511c8aa0-4327-455c-8caa-66bc442d199f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450039 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/513f6549-901c-4faf-9011-af95fe7398ae-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450061 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450100 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450229 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450249 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfvwb\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-kube-api-access-mfvwb\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450271 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/511c8aa0-4327-455c-8caa-66bc442d199f-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450292 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450316 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450332 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-config-data\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.450364 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.451306 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-config-data\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.451622 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.452102 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-config-data\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.452297 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.452354 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.453454 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.456015 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.456186 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.456711 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-server-conf\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.458206 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.461175 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.461964 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.462003 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1644dd700f873427ddf54e59bc76644eeebe7e9509c2e4034b86df93b2f0369d/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.463059 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/511c8aa0-4327-455c-8caa-66bc442d199f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.470404 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.474648 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/511c8aa0-4327-455c-8caa-66bc442d199f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.475205 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqn2v\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-kube-api-access-vqn2v\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.476992 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/513f6549-901c-4faf-9011-af95fe7398ae-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.479191 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.479530 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.479565 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b9bff04708a599d5f53c572127237f4c9d010110aa6bab1729a82f64f95a892/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.480370 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/513f6549-901c-4faf-9011-af95fe7398ae-pod-info\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.484901 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfvwb\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-kube-api-access-mfvwb\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.518276 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") pod \"rabbitmq-server-0\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.535281 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") pod \"rabbitmq-server-2\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.548782 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.552767 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.553358 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.560130 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.561905 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.561954 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.562658 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.562830 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.562993 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.563217 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tnb2z" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.563449 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.564445 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.564555 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.564892 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.565157 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.566008 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-server-conf\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.564607 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.569063 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-config-data\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.569937 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-config-data\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.571844 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.572377 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.572434 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.572473 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.572640 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hglx\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-kube-api-access-6hglx\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.572675 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-pod-info\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.579673 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.584256 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-pod-info\") pod \"rabbitmq-server-1\" (UID: 
\"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.584608 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.589663 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.593050 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.593448 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.593468 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/622162fda3c187b77c125c9b656ad6fabd080bc128e80c097463db866b5ded2c/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.604353 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.614133 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hglx\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-kube-api-access-6hglx\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.623064 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.676759 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.676832 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.676872 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.676926 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746sb\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-kube-api-access-746sb\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.676966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.676991 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.677023 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.677073 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.677121 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.677141 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.677198 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.690938 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") pod \"rabbitmq-server-1\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779021 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779104 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779129 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779220 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779275 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779306 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779326 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779385 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746sb\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-kube-api-access-746sb\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779436 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779461 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.779512 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.780965 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.782021 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.784054 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.784899 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.788359 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.788404 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.789083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.802438 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.806124 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.806191 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7b8d62671965b6040285a6764c0fa44cceb017a469216241927199c14219dda8/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.808391 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.810802 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-746sb\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-kube-api-access-746sb\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.885718 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") pod \"rabbitmq-cell1-server-0\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.939879 4982 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 24 15:11:02 crc kubenswrapper[4982]: I0224 15:11:02.942611 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.421094 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" event={"ID":"bea504cd-94bd-4526-a1c8-e26fc2bcb918","Type":"ContainerStarted","Data":"44a71638b4b49056dde35e8063a1d7de2edded9a95ea73a4970a6532991d8dba"} Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.471743 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.497439 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.703061 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.707252 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.709208 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.714377 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.715880 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.716029 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mt7gp" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.720855 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.729660 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.820099 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2e9dd965-1448-4801-9871-b6d949a7e1e7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.820152 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9dd965-1448-4801-9871-b6d949a7e1e7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.820197 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2e9dd965-1448-4801-9871-b6d949a7e1e7-config-data-default\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.820225 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e9dd965-1448-4801-9871-b6d949a7e1e7-kolla-config\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.820258 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e9dd965-1448-4801-9871-b6d949a7e1e7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.820274 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9dd965-1448-4801-9871-b6d949a7e1e7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.820306 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vr6\" (UniqueName: \"kubernetes.io/projected/2e9dd965-1448-4801-9871-b6d949a7e1e7-kube-api-access-z7vr6\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.820338 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-17b93c0f-b910-4348-8bf7-da260fe86093\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17b93c0f-b910-4348-8bf7-da260fe86093\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.847589 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 24 15:11:03 crc kubenswrapper[4982]: W0224 15:11:03.852874 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e5d5770_3a37_45ef_99d4_f51cfb8e42b4.slice/crio-83f842b007cb2fe779680453e09ac4cf62e43824ccab1ecd7dd7a4e4c4d42b20 WatchSource:0}: Error finding container 83f842b007cb2fe779680453e09ac4cf62e43824ccab1ecd7dd7a4e4c4d42b20: Status 404 returned error can't find the container with id 83f842b007cb2fe779680453e09ac4cf62e43824ccab1ecd7dd7a4e4c4d42b20 Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.922587 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e9dd965-1448-4801-9871-b6d949a7e1e7-kolla-config\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.922678 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e9dd965-1448-4801-9871-b6d949a7e1e7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.922702 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9dd965-1448-4801-9871-b6d949a7e1e7-galera-tls-certs\") pod 
\"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.922758 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vr6\" (UniqueName: \"kubernetes.io/projected/2e9dd965-1448-4801-9871-b6d949a7e1e7-kube-api-access-z7vr6\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.922804 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-17b93c0f-b910-4348-8bf7-da260fe86093\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17b93c0f-b910-4348-8bf7-da260fe86093\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.922889 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2e9dd965-1448-4801-9871-b6d949a7e1e7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.922925 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9dd965-1448-4801-9871-b6d949a7e1e7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.922987 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2e9dd965-1448-4801-9871-b6d949a7e1e7-config-data-default\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.924180 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2e9dd965-1448-4801-9871-b6d949a7e1e7-config-data-default\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.924260 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e9dd965-1448-4801-9871-b6d949a7e1e7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.925626 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e9dd965-1448-4801-9871-b6d949a7e1e7-kolla-config\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.926035 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2e9dd965-1448-4801-9871-b6d949a7e1e7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 
15:11:03.933133 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e9dd965-1448-4801-9871-b6d949a7e1e7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.938994 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.939062 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-17b93c0f-b910-4348-8bf7-da260fe86093\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17b93c0f-b910-4348-8bf7-da260fe86093\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/158dbb3d992254775abfc3b5ca046ec4c9fb664b6af41cb39151cc19bb03e751/globalmount\"" pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.963388 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e9dd965-1448-4801-9871-b6d949a7e1e7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.985460 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vr6\" (UniqueName: \"kubernetes.io/projected/2e9dd965-1448-4801-9871-b6d949a7e1e7-kube-api-access-z7vr6\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:03 crc kubenswrapper[4982]: I0224 15:11:03.994369 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 15:11:04 crc kubenswrapper[4982]: I0224 15:11:04.021612 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-17b93c0f-b910-4348-8bf7-da260fe86093\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17b93c0f-b910-4348-8bf7-da260fe86093\") pod \"openstack-galera-0\" (UID: \"2e9dd965-1448-4801-9871-b6d949a7e1e7\") " pod="openstack/openstack-galera-0" Feb 24 15:11:04 crc kubenswrapper[4982]: I0224 15:11:04.025904 4982 util.go:30] "No sandbox for pod can be found. 
Feb 24 15:11:04 crc kubenswrapper[4982]: I0224 15:11:04.455064 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4","Type":"ContainerStarted","Data":"83f842b007cb2fe779680453e09ac4cf62e43824ccab1ecd7dd7a4e4c4d42b20"}
Feb 24 15:11:04 crc kubenswrapper[4982]: I0224 15:11:04.457173 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"511c8aa0-4327-455c-8caa-66bc442d199f","Type":"ContainerStarted","Data":"bc85243f9648df10a7ce8e80ea141fb0689a697467300867d7668654a92195c4"}
Feb 24 15:11:04 crc kubenswrapper[4982]: I0224 15:11:04.461893 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6","Type":"ContainerStarted","Data":"0a2f945a404222b50fa7c2998f8dd7031fcbda9c30b1a9674e1e79ead812a4fb"}
Feb 24 15:11:04 crc kubenswrapper[4982]: I0224 15:11:04.467046 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"513f6549-901c-4faf-9011-af95fe7398ae","Type":"ContainerStarted","Data":"fc8cd992b63ffdbe62c9004d9eebfd642bee0eb9bc70f636237c45bc62eaed59"}
Feb 24 15:11:04 crc kubenswrapper[4982]: I0224 15:11:04.690881 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 24 15:11:04 crc kubenswrapper[4982]: W0224 15:11:04.698191 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e9dd965_1448_4801_9871_b6d949a7e1e7.slice/crio-229d4d65457d8e2e9ab241b28c66bc6d44af9b30a7f4fe6c26760c94f354e19e WatchSource:0}: Error finding container 229d4d65457d8e2e9ab241b28c66bc6d44af9b30a7f4fe6c26760c94f354e19e: Status 404 returned error can't find the container with id 229d4d65457d8e2e9ab241b28c66bc6d44af9b30a7f4fe6c26760c94f354e19e
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.239331 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.243726 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.250428 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-qbrhh"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.251168 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.251618 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.252053 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.252190 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.356069 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92994661-7f5e-4171-9e13-f725269e3475-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.356210 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92994661-7f5e-4171-9e13-f725269e3475-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.356248 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92994661-7f5e-4171-9e13-f725269e3475-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.356273 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj7xw\" (UniqueName: \"kubernetes.io/projected/92994661-7f5e-4171-9e13-f725269e3475-kube-api-access-wj7xw\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.356343 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92994661-7f5e-4171-9e13-f725269e3475-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.356379 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92994661-7f5e-4171-9e13-f725269e3475-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.356398 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92994661-7f5e-4171-9e13-f725269e3475-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.356429 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f4fd3f18-0668-461a-bb35-4345195866b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4fd3f18-0668-461a-bb35-4345195866b2\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.461884 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92994661-7f5e-4171-9e13-f725269e3475-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.463353 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92994661-7f5e-4171-9e13-f725269e3475-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.463427 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92994661-7f5e-4171-9e13-f725269e3475-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.463467 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj7xw\" (UniqueName: \"kubernetes.io/projected/92994661-7f5e-4171-9e13-f725269e3475-kube-api-access-wj7xw\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.463624 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92994661-7f5e-4171-9e13-f725269e3475-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.463661 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92994661-7f5e-4171-9e13-f725269e3475-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.463676 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92994661-7f5e-4171-9e13-f725269e3475-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.463710 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f4fd3f18-0668-461a-bb35-4345195866b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4fd3f18-0668-461a-bb35-4345195866b2\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.464231 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/92994661-7f5e-4171-9e13-f725269e3475-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.464263 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/92994661-7f5e-4171-9e13-f725269e3475-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.466111 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/92994661-7f5e-4171-9e13-f725269e3475-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.473094 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.473139 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f4fd3f18-0668-461a-bb35-4345195866b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4fd3f18-0668-461a-bb35-4345195866b2\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/989bf0ba69c797d2b7dea82b461fb2999ab4e8453dd3f7fd80a15e94e7e1f14a/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.476791 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/92994661-7f5e-4171-9e13-f725269e3475-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.501186 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92994661-7f5e-4171-9e13-f725269e3475-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.506902 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92994661-7f5e-4171-9e13-f725269e3475-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0"
\"kubernetes.io/projected/92994661-7f5e-4171-9e13-f725269e3475-kube-api-access-wj7xw\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.522789 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2e9dd965-1448-4801-9871-b6d949a7e1e7","Type":"ContainerStarted","Data":"229d4d65457d8e2e9ab241b28c66bc6d44af9b30a7f4fe6c26760c94f354e19e"} Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.557052 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f4fd3f18-0668-461a-bb35-4345195866b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4fd3f18-0668-461a-bb35-4345195866b2\") pod \"openstack-cell1-galera-0\" (UID: \"92994661-7f5e-4171-9e13-f725269e3475\") " pod="openstack/openstack-cell1-galera-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.563371 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.566083 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.573543 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-mk7rp" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.573795 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.574019 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.580245 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.603113 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.669551 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v2th\" (UniqueName: \"kubernetes.io/projected/01f84659-7499-464a-9476-fddfca0dec8a-kube-api-access-5v2th\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.669792 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01f84659-7499-464a-9476-fddfca0dec8a-kolla-config\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.670227 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f84659-7499-464a-9476-fddfca0dec8a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.670668 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f84659-7499-464a-9476-fddfca0dec8a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.670771 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01f84659-7499-464a-9476-fddfca0dec8a-config-data\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.774121 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f84659-7499-464a-9476-fddfca0dec8a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.774964 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f84659-7499-464a-9476-fddfca0dec8a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.775007 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01f84659-7499-464a-9476-fddfca0dec8a-config-data\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.775148 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v2th\" (UniqueName: \"kubernetes.io/projected/01f84659-7499-464a-9476-fddfca0dec8a-kube-api-access-5v2th\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.775197 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/01f84659-7499-464a-9476-fddfca0dec8a-kolla-config\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.776819 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01f84659-7499-464a-9476-fddfca0dec8a-config-data\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.777465 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01f84659-7499-464a-9476-fddfca0dec8a-kolla-config\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.782564 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f84659-7499-464a-9476-fddfca0dec8a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.791105 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f84659-7499-464a-9476-fddfca0dec8a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.796304 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v2th\" (UniqueName: \"kubernetes.io/projected/01f84659-7499-464a-9476-fddfca0dec8a-kube-api-access-5v2th\") pod \"memcached-0\" (UID: \"01f84659-7499-464a-9476-fddfca0dec8a\") " pod="openstack/memcached-0" Feb 24 15:11:05 crc kubenswrapper[4982]: I0224 15:11:05.930874 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 24 15:11:06 crc kubenswrapper[4982]: I0224 15:11:06.607482 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 15:11:06 crc kubenswrapper[4982]: I0224 15:11:06.775774 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 24 15:11:07 crc kubenswrapper[4982]: I0224 15:11:07.590266 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"92994661-7f5e-4171-9e13-f725269e3475","Type":"ContainerStarted","Data":"ce6d20ee865d73ee99802fc4155e725ea8d1b0ce4d006c10c5c1f8bb31f39299"} Feb 24 15:11:07 crc kubenswrapper[4982]: I0224 15:11:07.596742 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"01f84659-7499-464a-9476-fddfca0dec8a","Type":"ContainerStarted","Data":"62d429cfb16432f0b6b6baaca5c004458e797e4e888598c778bbd9f14b6e8e73"} Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.060586 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.061816 4982 util.go:30] "No sandbox for pod can be found. 
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.061816 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.064801 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jjjv4"
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.086090 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.143905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlcc8\" (UniqueName: \"kubernetes.io/projected/3cbd28d0-4f27-43d5-86cb-8fbd471d4098-kube-api-access-vlcc8\") pod \"kube-state-metrics-0\" (UID: \"3cbd28d0-4f27-43d5-86cb-8fbd471d4098\") " pod="openstack/kube-state-metrics-0"
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.246752 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlcc8\" (UniqueName: \"kubernetes.io/projected/3cbd28d0-4f27-43d5-86cb-8fbd471d4098-kube-api-access-vlcc8\") pod \"kube-state-metrics-0\" (UID: \"3cbd28d0-4f27-43d5-86cb-8fbd471d4098\") " pod="openstack/kube-state-metrics-0"
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.302208 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlcc8\" (UniqueName: \"kubernetes.io/projected/3cbd28d0-4f27-43d5-86cb-8fbd471d4098-kube-api-access-vlcc8\") pod \"kube-state-metrics-0\" (UID: \"3cbd28d0-4f27-43d5-86cb-8fbd471d4098\") " pod="openstack/kube-state-metrics-0"
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.429109 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.738178 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.738545 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.738590 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf"
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.739303 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5be665899696d5c8fd21b8f8f600a79f59d38e14863a16150fbf781e7134602b"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 15:11:08 crc kubenswrapper[4982]: I0224 15:11:08.739360 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://5be665899696d5c8fd21b8f8f600a79f59d38e14863a16150fbf781e7134602b" gracePeriod=600
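The five lines above are a complete probe-driven restart in miniature: the HTTP prober's GET to 127.0.0.1:8798/health fails with connection refused, the prober records the failed liveness probe, SyncLoop picks up the unhealthy status, and the runtime manager kills the container with the pod's 600-second grace period (PLEG reports the ContainerDied event shortly after, at 15:11:09.713604 below). A rough equivalent of the check itself; the one-second timeout is an assumption, and the real prober also honors failureThreshold and probe headers:

```go
// Rough equivalent of the failing liveness check above: an HTTP GET where
// any transport error or non-2xx/3xx status counts as "unhealthy", which is
// what drives the container kill that follows in the log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second} // timeoutSeconds: 1 is an assumption
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```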
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.023734 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"]
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.025640 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.030707 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.034242 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-kpkmf"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.053004 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"]
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.086592 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlfs6\" (UniqueName: \"kubernetes.io/projected/10e410a1-e886-451e-9cfc-40f6812a4d0d-kube-api-access-xlfs6\") pod \"observability-ui-dashboards-66cbf594b5-m6fxj\" (UID: \"10e410a1-e886-451e-9cfc-40f6812a4d0d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.086666 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e410a1-e886-451e-9cfc-40f6812a4d0d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m6fxj\" (UID: \"10e410a1-e886-451e-9cfc-40f6812a4d0d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.188540 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlfs6\" (UniqueName: \"kubernetes.io/projected/10e410a1-e886-451e-9cfc-40f6812a4d0d-kube-api-access-xlfs6\") pod \"observability-ui-dashboards-66cbf594b5-m6fxj\" (UID: \"10e410a1-e886-451e-9cfc-40f6812a4d0d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.188898 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e410a1-e886-451e-9cfc-40f6812a4d0d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m6fxj\" (UID: \"10e410a1-e886-451e-9cfc-40f6812a4d0d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"
Feb 24 15:11:09 crc kubenswrapper[4982]: E0224 15:11:09.189062 4982 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found
Feb 24 15:11:09 crc kubenswrapper[4982]: E0224 15:11:09.189116 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10e410a1-e886-451e-9cfc-40f6812a4d0d-serving-cert podName:10e410a1-e886-451e-9cfc-40f6812a4d0d nodeName:}" failed. No retries permitted until 2026-02-24 15:11:09.689097237 +0000 UTC m=+1331.308155730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/10e410a1-e886-451e-9cfc-40f6812a4d0d-serving-cert") pod "observability-ui-dashboards-66cbf594b5-m6fxj" (UID: "10e410a1-e886-451e-9cfc-40f6812a4d0d") : secret "observability-ui-dashboards" not found
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.267670 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlfs6\" (UniqueName: \"kubernetes.io/projected/10e410a1-e886-451e-9cfc-40f6812a4d0d-kube-api-access-xlfs6\") pod \"observability-ui-dashboards-66cbf594b5-m6fxj\" (UID: \"10e410a1-e886-451e-9cfc-40f6812a4d0d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.495909 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-846fcbcdcb-cdqlq"]
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.497196 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-846fcbcdcb-cdqlq"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.523306 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-846fcbcdcb-cdqlq"]
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.591996 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.624730 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.627383 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.627411 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.632768 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.633065 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.633191 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-sdt6c"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.633334 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.634886 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.634996 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.635110 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
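The serving-cert failure above is a startup ordering race, not a fault: the pod was scheduled before its operator had created the "observability-ui-dashboards" secret, so MountVolume.SetUp fails and nestedpendingoperations schedules a retry 500ms later, doubling the delay on repeated failures (the log then shows the same volume mounting successfully at 15:11:09.762156 once the secret exists). A sketch of that retry pattern; the 500ms start and factor-2 growth match the log, while the step count, the cap, and the secretExists helper are assumptions of the example:

```go
// Sketch of the exponential-backoff retry behind "durationBeforeRetry 500ms".
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// secretExists stands in for the kubelet's secret manager lookup; hypothetical helper.
func secretExists(namespace, name string) bool { return false }

func main() {
	attempt := 0
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 8}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		if secretExists("openshift-operators", "observability-ui-dashboards") {
			return true, nil // MountVolume.SetUp can now succeed
		}
		fmt.Printf("attempt %d: secret not found, retrying\n", attempt)
		return false, nil
	})
	if err != nil {
		fmt.Println("giving up:", err) // timeout error once the steps are exhausted
	}
}
```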
\"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.641003 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-service-ca\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.641124 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfnk\" (UniqueName: \"kubernetes.io/projected/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-kube-api-access-qcfnk\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.641161 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-oauth-serving-cert\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.641195 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-console-oauth-config\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.641284 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-console-config\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.641301 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-console-serving-cert\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.651898 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.713604 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="5be665899696d5c8fd21b8f8f600a79f59d38e14863a16150fbf781e7134602b" exitCode=0 Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.713652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"5be665899696d5c8fd21b8f8f600a79f59d38e14863a16150fbf781e7134602b"} Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.713681 4982 scope.go:117] "RemoveContainer" containerID="aa114d36019ce5147a672d1a1ffc47f09215e89baaa3aa8a9c736d89a2586a36" Feb 24 15:11:09 crc 
kubenswrapper[4982]: I0224 15:11:09.745555 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745628 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-console-config\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745653 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-console-serving-cert\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745689 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745733 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e410a1-e886-451e-9cfc-40f6812a4d0d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m6fxj\" (UID: \"10e410a1-e886-451e-9cfc-40f6812a4d0d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745758 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745789 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d25d513-4841-4cbf-9e48-7ce1494d6450-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745818 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745877 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-trusted-ca-bundle\") pod 
\"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745917 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-service-ca\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.745959 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9rrc\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-kube-api-access-d9rrc\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.746010 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.746067 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcfnk\" (UniqueName: \"kubernetes.io/projected/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-kube-api-access-qcfnk\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.746108 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-oauth-serving-cert\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.746133 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.746165 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.746187 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-console-oauth-config\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.746215 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.748565 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-console-config\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.749916 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-service-ca\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.751036 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-oauth-serving-cert\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.751240 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-trusted-ca-bundle\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.762156 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e410a1-e886-451e-9cfc-40f6812a4d0d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m6fxj\" (UID: \"10e410a1-e886-451e-9cfc-40f6812a4d0d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.772403 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-console-oauth-config\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.775291 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcfnk\" (UniqueName: \"kubernetes.io/projected/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-kube-api-access-qcfnk\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.785287 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/113c0a67-b3b4-44c0-b1a4-26e0e9337d63-console-serving-cert\") pod \"console-846fcbcdcb-cdqlq\" (UID: \"113c0a67-b3b4-44c0-b1a4-26e0e9337d63\") " pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.847675 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9rrc\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-kube-api-access-d9rrc\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.847739 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.847809 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.847840 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.847872 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.847917 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.848053 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.848108 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.848143 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d25d513-4841-4cbf-9e48-7ce1494d6450-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " 
pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.848171 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.849209 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.849282 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.849612 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.853190 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.854385 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.854405 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d25d513-4841-4cbf-9e48-7ce1494d6450-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.855136 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.855168 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/974781c848519588b21abb088da5a5ce03d5802a8189d06d25b9b3e41a1054bf/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.856513 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.861233 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.862021 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.873582 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9rrc\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-kube-api-access-d9rrc\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.910564 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") pod \"prometheus-metric-storage-0\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.969627 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"
Feb 24 15:11:09 crc kubenswrapper[4982]: I0224 15:11:09.970780 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.701219 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xcvbd"]
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.703025 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.707635 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-bl4qx"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.707996 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.708059 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.710571 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xcvbd"]
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.742189 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gsdm4"]
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.747671 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.764410 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gsdm4"]
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.877692 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-var-lib\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.877752 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c21939f-82d2-4553-acbd-b570e4d1527c-var-run\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.877788 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c21939f-82d2-4553-acbd-b570e4d1527c-combined-ca-bundle\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.877825 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c21939f-82d2-4553-acbd-b570e4d1527c-scripts\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.877880 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c21939f-82d2-4553-acbd-b570e4d1527c-var-run-ovn\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.877924 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-var-run\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
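Unlike the configmap, secret, and projected volumes mounted for the earlier pods, both OVN pods use kubernetes.io/host-path volumes (var-run, var-run-ovn, var-lib, var-log, etc-ovs), which are simply bind-mounted from the node, so no per-volume content setup is needed. A reconstruction of what such a volume looks like when built with the core/v1 API types; this is not the ovn-operator's actual manifest, and the node path and type are assumptions:

```go
// Reconstruction only: a kubernetes.io/host-path volume like "var-run" above,
// expressed with the core/v1 types. Host paths bind-mount a node directory
// directly into the pod.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	dirType := corev1.HostPathDirectory // assumed; DirectoryOrCreate is also common
	vol := corev1.Volume{
		Name: "var-run",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/var/run/openvswitch", // assumed node path
				Type: &dirType,
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```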
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.877956 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c21939f-82d2-4553-acbd-b570e4d1527c-var-log-ovn\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.877973 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhxdx\" (UniqueName: \"kubernetes.io/projected/2c21939f-82d2-4553-acbd-b570e4d1527c-kube-api-access-mhxdx\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.878006 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-var-log\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.878040 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v9z2\" (UniqueName: \"kubernetes.io/projected/8534c002-8446-4b80-ae93-6d529c59d1df-kube-api-access-4v9z2\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.878061 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-etc-ovs\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.878078 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8534c002-8446-4b80-ae93-6d529c59d1df-scripts\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.878091 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c21939f-82d2-4553-acbd-b570e4d1527c-ovn-controller-tls-certs\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979491 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c21939f-82d2-4553-acbd-b570e4d1527c-scripts\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979577 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c21939f-82d2-4553-acbd-b570e4d1527c-var-run-ovn\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979619 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-var-run\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979644 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c21939f-82d2-4553-acbd-b570e4d1527c-var-log-ovn\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979666 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhxdx\" (UniqueName: \"kubernetes.io/projected/2c21939f-82d2-4553-acbd-b570e4d1527c-kube-api-access-mhxdx\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979701 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-var-log\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979749 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v9z2\" (UniqueName: \"kubernetes.io/projected/8534c002-8446-4b80-ae93-6d529c59d1df-kube-api-access-4v9z2\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979772 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-etc-ovs\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979787 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8534c002-8446-4b80-ae93-6d529c59d1df-scripts\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979800 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c21939f-82d2-4553-acbd-b570e4d1527c-ovn-controller-tls-certs\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979829 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-var-lib\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979850 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c21939f-82d2-4553-acbd-b570e4d1527c-var-run\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.979877 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c21939f-82d2-4553-acbd-b570e4d1527c-combined-ca-bundle\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.980754 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-var-log\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.980867 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c21939f-82d2-4553-acbd-b570e4d1527c-var-log-ovn\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.980941 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-var-run\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.981315 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c21939f-82d2-4553-acbd-b570e4d1527c-var-run-ovn\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.981349 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-etc-ovs\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.981472 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8534c002-8446-4b80-ae93-6d529c59d1df-var-lib\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.981487 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c21939f-82d2-4553-acbd-b570e4d1527c-var-run\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.982317 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8534c002-8446-4b80-ae93-6d529c59d1df-scripts\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.982776 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c21939f-82d2-4553-acbd-b570e4d1527c-scripts\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.997760 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c21939f-82d2-4553-acbd-b570e4d1527c-combined-ca-bundle\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:10 crc kubenswrapper[4982]: I0224 15:11:10.998037 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c21939f-82d2-4553-acbd-b570e4d1527c-ovn-controller-tls-certs\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.009541 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhxdx\" (UniqueName: \"kubernetes.io/projected/2c21939f-82d2-4553-acbd-b570e4d1527c-kube-api-access-mhxdx\") pod \"ovn-controller-xcvbd\" (UID: \"2c21939f-82d2-4553-acbd-b570e4d1527c\") " pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.021770 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v9z2\" (UniqueName: \"kubernetes.io/projected/8534c002-8446-4b80-ae93-6d529c59d1df-kube-api-access-4v9z2\") pod \"ovn-controller-ovs-gsdm4\" (UID: \"8534c002-8446-4b80-ae93-6d529c59d1df\") " pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.049002 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xcvbd"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.083978 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gsdm4"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.615231 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.620261 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.623167 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.623490 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.623593 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.623718 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.623798 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-vh8bw"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.633203 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.804882 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83806fad-ace2-4023-9023-88d534d78650-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.805913 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz46n\" (UniqueName: \"kubernetes.io/projected/83806fad-ace2-4023-9023-88d534d78650-kube-api-access-kz46n\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.805939 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83806fad-ace2-4023-9023-88d534d78650-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.806003 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83806fad-ace2-4023-9023-88d534d78650-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.806021 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83806fad-ace2-4023-9023-88d534d78650-config\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.806292 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-24de8f2e-7e0b-4fac-a1fd-8a512b00d4ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24de8f2e-7e0b-4fac-a1fd-8a512b00d4ba\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.806960 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83806fad-ace2-4023-9023-88d534d78650-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.807104 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/83806fad-ace2-4023-9023-88d534d78650-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.909420 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz46n\" (UniqueName: \"kubernetes.io/projected/83806fad-ace2-4023-9023-88d534d78650-kube-api-access-kz46n\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.909464 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83806fad-ace2-4023-9023-88d534d78650-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.909537 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83806fad-ace2-4023-9023-88d534d78650-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.909569 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83806fad-ace2-4023-9023-88d534d78650-config\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.910057 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83806fad-ace2-4023-9023-88d534d78650-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.910638 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83806fad-ace2-4023-9023-88d534d78650-config\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.910705 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83806fad-ace2-4023-9023-88d534d78650-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.910759 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-24de8f2e-7e0b-4fac-a1fd-8a512b00d4ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24de8f2e-7e0b-4fac-a1fd-8a512b00d4ba\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.910880 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83806fad-ace2-4023-9023-88d534d78650-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.910913 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/83806fad-ace2-4023-9023-88d534d78650-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.910963 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83806fad-ace2-4023-9023-88d534d78650-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.922689 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83806fad-ace2-4023-9023-88d534d78650-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.922689 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/83806fad-ace2-4023-9023-88d534d78650-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.923169 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.923211 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-24de8f2e-7e0b-4fac-a1fd-8a512b00d4ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24de8f2e-7e0b-4fac-a1fd-8a512b00d4ba\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fefa3c0cfef5718d37f638c2c2a59f32d6e203f4af2b425baee3689be035f7ff/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.923377 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83806fad-ace2-4023-9023-88d534d78650-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.925418 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz46n\" (UniqueName: \"kubernetes.io/projected/83806fad-ace2-4023-9023-88d534d78650-kube-api-access-kz46n\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:11 crc kubenswrapper[4982]: I0224 15:11:11.967879 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-24de8f2e-7e0b-4fac-a1fd-8a512b00d4ba\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24de8f2e-7e0b-4fac-a1fd-8a512b00d4ba\") pod \"ovsdbserver-nb-0\" (UID: \"83806fad-ace2-4023-9023-88d534d78650\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:12 crc kubenswrapper[4982]: I0224 15:11:12.266176 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.270207 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.273638 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.279413 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.279757 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.279885 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.280179 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-jln2l"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.286082 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.400186 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2dpf\" (UniqueName: \"kubernetes.io/projected/43d8feff-fc48-4fcc-86f8-ce96094eded1-kube-api-access-f2dpf\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.400262 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e78dc9ef-3171-48e8-8d46-b16bb1a0e20f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e78dc9ef-3171-48e8-8d46-b16bb1a0e20f\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.400440 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d8feff-fc48-4fcc-86f8-ce96094eded1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.400530 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d8feff-fc48-4fcc-86f8-ce96094eded1-config\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.400651 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43d8feff-fc48-4fcc-86f8-ce96094eded1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.400742 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d8feff-fc48-4fcc-86f8-ce96094eded1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.400862 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43d8feff-fc48-4fcc-86f8-ce96094eded1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.400916 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d8feff-fc48-4fcc-86f8-ce96094eded1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.504082 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43d8feff-fc48-4fcc-86f8-ce96094eded1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.504202 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d8feff-fc48-4fcc-86f8-ce96094eded1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.504329 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2dpf\" (UniqueName: \"kubernetes.io/projected/43d8feff-fc48-4fcc-86f8-ce96094eded1-kube-api-access-f2dpf\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.504379 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e78dc9ef-3171-48e8-8d46-b16bb1a0e20f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e78dc9ef-3171-48e8-8d46-b16bb1a0e20f\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.504430 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d8feff-fc48-4fcc-86f8-ce96094eded1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.504484 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d8feff-fc48-4fcc-86f8-ce96094eded1-config\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.504605 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43d8feff-fc48-4fcc-86f8-ce96094eded1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.504744 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d8feff-fc48-4fcc-86f8-ce96094eded1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.504863 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/43d8feff-fc48-4fcc-86f8-ce96094eded1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.505456 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d8feff-fc48-4fcc-86f8-ce96094eded1-config\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.506442 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43d8feff-fc48-4fcc-86f8-ce96094eded1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.509690 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.509767 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e78dc9ef-3171-48e8-8d46-b16bb1a0e20f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e78dc9ef-3171-48e8-8d46-b16bb1a0e20f\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3b0394445f9ecbd7b0a0504aa4ea6f1776de868149ec3b031f0db3cbdc100668/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.511021 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d8feff-fc48-4fcc-86f8-ce96094eded1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.512011 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d8feff-fc48-4fcc-86f8-ce96094eded1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.513657 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d8feff-fc48-4fcc-86f8-ce96094eded1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.523111 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2dpf\" (UniqueName: \"kubernetes.io/projected/43d8feff-fc48-4fcc-86f8-ce96094eded1-kube-api-access-f2dpf\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.552137 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e78dc9ef-3171-48e8-8d46-b16bb1a0e20f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e78dc9ef-3171-48e8-8d46-b16bb1a0e20f\") pod \"ovsdbserver-sb-0\" (UID: \"43d8feff-fc48-4fcc-86f8-ce96094eded1\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:15 crc kubenswrapper[4982]: I0224 15:11:15.605928 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 24 15:11:21 crc kubenswrapper[4982]: W0224 15:11:21.644118 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cbd28d0_4f27_43d5_86cb_8fbd471d4098.slice/crio-3c161bbc4b1d0338ac2e7cde767c2c6324d59fcdd81ad4d0509cb5f0abb34d03 WatchSource:0}: Error finding container 3c161bbc4b1d0338ac2e7cde767c2c6324d59fcdd81ad4d0509cb5f0abb34d03: Status 404 returned error can't find the container with id 3c161bbc4b1d0338ac2e7cde767c2c6324d59fcdd81ad4d0509cb5f0abb34d03
Feb 24 15:11:21 crc kubenswrapper[4982]: I0224 15:11:21.873201 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3cbd28d0-4f27-43d5-86cb-8fbd471d4098","Type":"ContainerStarted","Data":"3c161bbc4b1d0338ac2e7cde767c2c6324d59fcdd81ad4d0509cb5f0abb34d03"}
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.817126 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.817468 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-746sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.819577 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.869755 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.869912 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqn2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(511c8aa0-4327-455c-8caa-66bc442d199f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.871229 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="511c8aa0-4327-455c-8caa-66bc442d199f"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.882634 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.882846 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="511c8aa0-4327-455c-8caa-66bc442d199f"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.911864 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.912030 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfvwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(513f6549-901c-4faf-9011-af95fe7398ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 15:11:22 crc kubenswrapper[4982]: E0224 15:11:22.913287 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="513f6549-901c-4faf-9011-af95fe7398ae"
Feb 24 15:11:23 crc kubenswrapper[4982]: E0224 15:11:23.893265 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="513f6549-901c-4faf-9011-af95fe7398ae"
Feb 24 15:11:34 crc kubenswrapper[4982]: I0224 15:11:34.010019 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-846fcbcdcb-cdqlq"]
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.409547 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.410283 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zp5x5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-gzknf_openstack(e85e7659-e0f8-4093-919d-e33963e6c286): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.413951 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" podUID="e85e7659-e0f8-4093-919d-e33963e6c286"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.426646 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.426857 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fmff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-ghtrh_openstack(bea504cd-94bd-4526-a1c8-e26fc2bcb918): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.428236 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" podUID="bea504cd-94bd-4526-a1c8-e26fc2bcb918"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.458780 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.458951 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-whvwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-f9ddw_openstack(05655848-0e8b-45b6-88c4-009e52d74f51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.464215 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" podUID="05655848-0e8b-45b6-88c4-009e52d74f51"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.629323 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.629737 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2hx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-vvxd8_openstack(1d7115b8-e3e3-42b8-bf78-964d9b28ff90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 15:11:34 crc kubenswrapper[4982]: E0224 15:11:34.630939 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" podUID="1d7115b8-e3e3-42b8-bf78-964d9b28ff90"
Feb 24 15:11:34 crc kubenswrapper[4982]: I0224 15:11:34.871583 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xcvbd"]
Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.007280 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846fcbcdcb-cdqlq" event={"ID":"113c0a67-b3b4-44c0-b1a4-26e0e9337d63","Type":"ContainerStarted","Data":"bed9cbb530a788acf83563ab54ed6d422327e893e3877c3e7773e93fe2ec9d2b"}
Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.010945 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"6f34013f8efc594f596b2fce7ffa43397c7edb98c85b96c1bc1ccf7f7c29f143"}
Feb 24 15:11:35 crc kubenswrapper[4982]: E0224 15:11:35.015034 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" podUID="bea504cd-94bd-4526-a1c8-e26fc2bcb918"
Feb 24 15:11:35 crc kubenswrapper[4982]: E0224 15:11:35.015684 4982 pod_workers.go:1301] "Error syncing
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" podUID="1d7115b8-e3e3-42b8-bf78-964d9b28ff90" Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.482444 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.709264 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj"] Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.793452 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.890153 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85e7659-e0f8-4093-919d-e33963e6c286-config\") pod \"e85e7659-e0f8-4093-919d-e33963e6c286\" (UID: \"e85e7659-e0f8-4093-919d-e33963e6c286\") " Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.890394 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp5x5\" (UniqueName: \"kubernetes.io/projected/e85e7659-e0f8-4093-919d-e33963e6c286-kube-api-access-zp5x5\") pod \"e85e7659-e0f8-4093-919d-e33963e6c286\" (UID: \"e85e7659-e0f8-4093-919d-e33963e6c286\") " Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.892146 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85e7659-e0f8-4093-919d-e33963e6c286-config" (OuterVolumeSpecName: "config") pod "e85e7659-e0f8-4093-919d-e33963e6c286" (UID: "e85e7659-e0f8-4093-919d-e33963e6c286"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.898274 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85e7659-e0f8-4093-919d-e33963e6c286-kube-api-access-zp5x5" (OuterVolumeSpecName: "kube-api-access-zp5x5") pod "e85e7659-e0f8-4093-919d-e33963e6c286" (UID: "e85e7659-e0f8-4093-919d-e33963e6c286"). InnerVolumeSpecName "kube-api-access-zp5x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.902163 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.963863 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.993245 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp5x5\" (UniqueName: \"kubernetes.io/projected/e85e7659-e0f8-4093-919d-e33963e6c286-kube-api-access-zp5x5\") on node \"crc\" DevicePath \"\"" Feb 24 15:11:35 crc kubenswrapper[4982]: I0224 15:11:35.993289 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85e7659-e0f8-4093-919d-e33963e6c286-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.077924 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj" event={"ID":"10e410a1-e886-451e-9cfc-40f6812a4d0d","Type":"ContainerStarted","Data":"85f681c920f72c521fb176522ddee04349b97441e1d154d4da2a292b1a98cece"} Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.081057 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerStarted","Data":"c0a23b33c481a8858f1121d311fddd2f2b8283da7878ab052d4377098cabb01f"} Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.082415 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.082419 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-f9ddw" event={"ID":"05655848-0e8b-45b6-88c4-009e52d74f51","Type":"ContainerDied","Data":"aa90bb27c94f7c8fc7e2394f977bc1f177a9ed50b669a4bb9620f27074d29c5e"} Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.084915 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"01f84659-7499-464a-9476-fddfca0dec8a","Type":"ContainerStarted","Data":"9f33cdd3f5a778062bf599d835683bbce4d032664f5453618b30edb0ff246728"} Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.085068 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.088571 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" event={"ID":"e85e7659-e0f8-4093-919d-e33963e6c286","Type":"ContainerDied","Data":"39859e132c1b83e5ca9d6b7027f503b70287648c1243750e4cc883c47c9c8419"} Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.088585 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gzknf" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.089787 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"43d8feff-fc48-4fcc-86f8-ce96094eded1","Type":"ContainerStarted","Data":"d339bbb6df8d5e9302c562444edfbe78e12de26d1a04e8bddd472d474eb72744"} Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.091107 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xcvbd" event={"ID":"2c21939f-82d2-4553-acbd-b570e4d1527c","Type":"ContainerStarted","Data":"b109584d952927c8eaa5b456337d2770a9f9fda9ce04463dc5862e8f9ef807fd"} Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.094023 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whvwv\" (UniqueName: \"kubernetes.io/projected/05655848-0e8b-45b6-88c4-009e52d74f51-kube-api-access-whvwv\") pod \"05655848-0e8b-45b6-88c4-009e52d74f51\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.094086 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-config\") pod \"05655848-0e8b-45b6-88c4-009e52d74f51\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.094143 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-dns-svc\") pod \"05655848-0e8b-45b6-88c4-009e52d74f51\" (UID: \"05655848-0e8b-45b6-88c4-009e52d74f51\") " Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.095104 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-config" (OuterVolumeSpecName: "config") pod "05655848-0e8b-45b6-88c4-009e52d74f51" (UID: "05655848-0e8b-45b6-88c4-009e52d74f51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.095168 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05655848-0e8b-45b6-88c4-009e52d74f51" (UID: "05655848-0e8b-45b6-88c4-009e52d74f51"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.105425 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05655848-0e8b-45b6-88c4-009e52d74f51-kube-api-access-whvwv" (OuterVolumeSpecName: "kube-api-access-whvwv") pod "05655848-0e8b-45b6-88c4-009e52d74f51" (UID: "05655848-0e8b-45b6-88c4-009e52d74f51"). InnerVolumeSpecName "kube-api-access-whvwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.113965 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.4271399989999995 podStartE2EDuration="31.11393841s" podCreationTimestamp="2026-02-24 15:11:05 +0000 UTC" firstStartedPulling="2026-02-24 15:11:06.820371519 +0000 UTC m=+1328.439430012" lastFinishedPulling="2026-02-24 15:11:33.50716993 +0000 UTC m=+1355.126228423" observedRunningTime="2026-02-24 15:11:36.10707848 +0000 UTC m=+1357.726136973" watchObservedRunningTime="2026-02-24 15:11:36.11393841 +0000 UTC m=+1357.732996903" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.196910 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whvwv\" (UniqueName: \"kubernetes.io/projected/05655848-0e8b-45b6-88c4-009e52d74f51-kube-api-access-whvwv\") on node \"crc\" DevicePath \"\"" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.196938 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.196950 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05655848-0e8b-45b6-88c4-009e52d74f51-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.198254 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gzknf"] Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.208257 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gzknf"] Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.250401 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gsdm4"] Feb 24 15:11:36 crc kubenswrapper[4982]: W0224 15:11:36.342893 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8534c002_8446_4b80_ae93_6d529c59d1df.slice/crio-57c095d0048dbff4500004c0d9b5a234e65bb28e35dd3995adcef70c65fa7621 WatchSource:0}: Error finding container 57c095d0048dbff4500004c0d9b5a234e65bb28e35dd3995adcef70c65fa7621: Status 404 returned error can't find the container with id 57c095d0048dbff4500004c0d9b5a234e65bb28e35dd3995adcef70c65fa7621 Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.455756 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9ddw"] Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.481983 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9ddw"] Feb 24 15:11:36 crc kubenswrapper[4982]: I0224 15:11:36.539803 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 24 15:11:37 crc kubenswrapper[4982]: I0224 15:11:37.102084 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846fcbcdcb-cdqlq" event={"ID":"113c0a67-b3b4-44c0-b1a4-26e0e9337d63","Type":"ContainerStarted","Data":"de63fbbfb026aade6b5bf45e3df1bfa4da7a6c9e8c2b20d5b4bd73e685e66c5f"} Feb 24 15:11:37 crc kubenswrapper[4982]: I0224 15:11:37.107288 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"92994661-7f5e-4171-9e13-f725269e3475","Type":"ContainerStarted","Data":"4dbe06b6d59b099e2bf04bbe34492a978da081698e9f2475bc7c9cd75a0d8e04"} Feb 24 15:11:37 crc kubenswrapper[4982]: I0224 15:11:37.112091 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gsdm4" event={"ID":"8534c002-8446-4b80-ae93-6d529c59d1df","Type":"ContainerStarted","Data":"57c095d0048dbff4500004c0d9b5a234e65bb28e35dd3995adcef70c65fa7621"} Feb 24 15:11:37 crc kubenswrapper[4982]: I0224 15:11:37.161930 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05655848-0e8b-45b6-88c4-009e52d74f51" path="/var/lib/kubelet/pods/05655848-0e8b-45b6-88c4-009e52d74f51/volumes" Feb 24 15:11:37 crc kubenswrapper[4982]: I0224 15:11:37.162335 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85e7659-e0f8-4093-919d-e33963e6c286" path="/var/lib/kubelet/pods/e85e7659-e0f8-4093-919d-e33963e6c286/volumes" Feb 24 15:11:37 crc kubenswrapper[4982]: I0224 15:11:37.164252 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-846fcbcdcb-cdqlq" podStartSLOduration=28.164226601 podStartE2EDuration="28.164226601s" podCreationTimestamp="2026-02-24 15:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:11:37.124864061 +0000 UTC m=+1358.743922574" watchObservedRunningTime="2026-02-24 15:11:37.164226601 +0000 UTC m=+1358.783285094" Feb 24 15:11:37 crc kubenswrapper[4982]: W0224 15:11:37.366672 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83806fad_ace2_4023_9023_88d534d78650.slice/crio-3ace383c1601d3863c78a266b84c774c6e3da76dd05666d596abd84cc8f0269a WatchSource:0}: Error finding container 3ace383c1601d3863c78a266b84c774c6e3da76dd05666d596abd84cc8f0269a: Status 404 returned error can't find the container with id 3ace383c1601d3863c78a266b84c774c6e3da76dd05666d596abd84cc8f0269a Feb 24 15:11:38 crc kubenswrapper[4982]: I0224 15:11:38.124105 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6","Type":"ContainerStarted","Data":"ca9e4f5ce81b1272724e6bbc4e2d1009e86f074a2f97121639328c4ceafcb76f"} Feb 24 15:11:38 crc kubenswrapper[4982]: I0224 15:11:38.125681 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2e9dd965-1448-4801-9871-b6d949a7e1e7","Type":"ContainerStarted","Data":"53693ad5d6f56121ebd6ecb11a987131e73a84d42c07684896673a0f8d4f91a4"} Feb 24 15:11:38 crc kubenswrapper[4982]: I0224 15:11:38.127202 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"83806fad-ace2-4023-9023-88d534d78650","Type":"ContainerStarted","Data":"3ace383c1601d3863c78a266b84c774c6e3da76dd05666d596abd84cc8f0269a"} Feb 24 15:11:38 crc kubenswrapper[4982]: I0224 15:11:38.129626 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4","Type":"ContainerStarted","Data":"eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4"} Feb 24 15:11:39 crc kubenswrapper[4982]: I0224 15:11:39.138649 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"513f6549-901c-4faf-9011-af95fe7398ae","Type":"ContainerStarted","Data":"d6fc08d870d7fad19cfccc0848ad3a2332a787597366253b2f0903bce152b284"} Feb 24 15:11:39 crc kubenswrapper[4982]: I0224 15:11:39.855988 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:39 crc kubenswrapper[4982]: I0224 15:11:39.856051 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:39 crc kubenswrapper[4982]: I0224 15:11:39.866966 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:40 crc kubenswrapper[4982]: I0224 15:11:40.169101 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-846fcbcdcb-cdqlq" Feb 24 15:11:40 crc kubenswrapper[4982]: I0224 15:11:40.236752 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-544ccdb57f-qqrjk"] Feb 24 15:11:41 crc kubenswrapper[4982]: I0224 15:11:41.167258 4982 generic.go:334] "Generic (PLEG): container finished" podID="92994661-7f5e-4171-9e13-f725269e3475" containerID="4dbe06b6d59b099e2bf04bbe34492a978da081698e9f2475bc7c9cd75a0d8e04" exitCode=0 Feb 24 15:11:41 crc kubenswrapper[4982]: I0224 15:11:41.167367 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"92994661-7f5e-4171-9e13-f725269e3475","Type":"ContainerDied","Data":"4dbe06b6d59b099e2bf04bbe34492a978da081698e9f2475bc7c9cd75a0d8e04"} Feb 24 15:11:43 crc kubenswrapper[4982]: I0224 15:11:43.211197 4982 generic.go:334] "Generic (PLEG): container finished" podID="2e9dd965-1448-4801-9871-b6d949a7e1e7" containerID="53693ad5d6f56121ebd6ecb11a987131e73a84d42c07684896673a0f8d4f91a4" exitCode=0 Feb 24 15:11:43 crc kubenswrapper[4982]: I0224 15:11:43.211418 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2e9dd965-1448-4801-9871-b6d949a7e1e7","Type":"ContainerDied","Data":"53693ad5d6f56121ebd6ecb11a987131e73a84d42c07684896673a0f8d4f91a4"} Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.224415 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"92994661-7f5e-4171-9e13-f725269e3475","Type":"ContainerStarted","Data":"1a825604a507f06f59ed0f9f3e3b6ac541ef2ad927de6604290927b9a5e7fdd4"} Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.226227 4982 generic.go:334] "Generic (PLEG): container finished" podID="8534c002-8446-4b80-ae93-6d529c59d1df" containerID="5fdd98e05e50c7f98de4e8bbed9b78c951faeb11720d52ecdde2eb12f3058597" exitCode=0 Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.226319 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gsdm4" event={"ID":"8534c002-8446-4b80-ae93-6d529c59d1df","Type":"ContainerDied","Data":"5fdd98e05e50c7f98de4e8bbed9b78c951faeb11720d52ecdde2eb12f3058597"} Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.227717 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj" event={"ID":"10e410a1-e886-451e-9cfc-40f6812a4d0d","Type":"ContainerStarted","Data":"5d42b1e87ae17f143445307e68b1d58e92d6f917ff00f226c77ab2500d18be59"} Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.231658 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"2e9dd965-1448-4801-9871-b6d949a7e1e7","Type":"ContainerStarted","Data":"0848ab7db38443fb88113142fbe8e53ad884c5d4f001199d6189e3393507a6a2"} Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.235193 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"83806fad-ace2-4023-9023-88d534d78650","Type":"ContainerStarted","Data":"d2e2b92d81f87c20c8a681ed8b609546a6427776f8b9be86549211dbf2acbfe8"} Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.240370 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3cbd28d0-4f27-43d5-86cb-8fbd471d4098","Type":"ContainerStarted","Data":"c911608bbfccdce91ead82c6e6edd612759035ffffe737f7d1b955afcbd4bdcd"} Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.240556 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.242859 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"43d8feff-fc48-4fcc-86f8-ce96094eded1","Type":"ContainerStarted","Data":"96c0f184a796534bae2c730fb646ded3fc43b5cd4e50f029780a3fcdd9376125"} Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.249016 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.368411766 podStartE2EDuration="40.249000239s" podCreationTimestamp="2026-02-24 15:11:04 +0000 UTC" firstStartedPulling="2026-02-24 15:11:06.626729681 +0000 UTC m=+1328.245788174" lastFinishedPulling="2026-02-24 15:11:33.507318134 +0000 UTC m=+1355.126376647" observedRunningTime="2026-02-24 15:11:44.245248636 +0000 UTC m=+1365.864307129" watchObservedRunningTime="2026-02-24 15:11:44.249000239 +0000 UTC m=+1365.868058732" Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.249349 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xcvbd" event={"ID":"2c21939f-82d2-4553-acbd-b570e4d1527c","Type":"ContainerStarted","Data":"0793fcac822bdbcbbca7f6953bd4f737e96729cc5b11f7cf2a601ebb1e882b77"} Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.250204 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xcvbd" Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.279323 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.359914934 podStartE2EDuration="42.279297183s" podCreationTimestamp="2026-02-24 15:11:02 +0000 UTC" firstStartedPulling="2026-02-24 15:11:04.703188314 +0000 UTC m=+1326.322246807" lastFinishedPulling="2026-02-24 15:11:35.622570563 +0000 UTC m=+1357.241629056" observedRunningTime="2026-02-24 15:11:44.275587162 +0000 UTC m=+1365.894645675" watchObservedRunningTime="2026-02-24 15:11:44.279297183 +0000 UTC m=+1365.898355706" Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.315615 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.825273085 podStartE2EDuration="36.315587769s" podCreationTimestamp="2026-02-24 15:11:08 +0000 UTC" firstStartedPulling="2026-02-24 15:11:21.64748453 +0000 UTC m=+1343.266543023" lastFinishedPulling="2026-02-24 15:11:43.137799204 +0000 UTC m=+1364.756857707" observedRunningTime="2026-02-24 15:11:44.298684069 +0000 UTC m=+1365.917742562" watchObservedRunningTime="2026-02-24 15:11:44.315587769 +0000 UTC 
m=+1365.934646272" Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.373289 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m6fxj" podStartSLOduration=29.10120482 podStartE2EDuration="36.373269828s" podCreationTimestamp="2026-02-24 15:11:08 +0000 UTC" firstStartedPulling="2026-02-24 15:11:35.888052083 +0000 UTC m=+1357.507110566" lastFinishedPulling="2026-02-24 15:11:43.160117061 +0000 UTC m=+1364.779175574" observedRunningTime="2026-02-24 15:11:44.3435778 +0000 UTC m=+1365.962636303" watchObservedRunningTime="2026-02-24 15:11:44.373269828 +0000 UTC m=+1365.992328321" Feb 24 15:11:44 crc kubenswrapper[4982]: I0224 15:11:44.385978 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xcvbd" podStartSLOduration=26.861009691 podStartE2EDuration="34.385961363s" podCreationTimestamp="2026-02-24 15:11:10 +0000 UTC" firstStartedPulling="2026-02-24 15:11:35.598381309 +0000 UTC m=+1357.217439792" lastFinishedPulling="2026-02-24 15:11:43.123332971 +0000 UTC m=+1364.742391464" observedRunningTime="2026-02-24 15:11:44.367000537 +0000 UTC m=+1365.986059040" watchObservedRunningTime="2026-02-24 15:11:44.385961363 +0000 UTC m=+1366.005019856" Feb 24 15:11:45 crc kubenswrapper[4982]: I0224 15:11:45.265748 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"511c8aa0-4327-455c-8caa-66bc442d199f","Type":"ContainerStarted","Data":"9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179"} Feb 24 15:11:45 crc kubenswrapper[4982]: I0224 15:11:45.270462 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gsdm4" event={"ID":"8534c002-8446-4b80-ae93-6d529c59d1df","Type":"ContainerStarted","Data":"57ef701608ad0a16d03a17e0a3a574cceab2ea074c31c9d872bd4fc741449586"} Feb 24 15:11:45 crc kubenswrapper[4982]: I0224 15:11:45.604955 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 15:11:45 crc kubenswrapper[4982]: I0224 15:11:45.605387 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 15:11:45 crc kubenswrapper[4982]: I0224 15:11:45.932822 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 24 15:11:46 crc kubenswrapper[4982]: I0224 15:11:46.282193 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"43d8feff-fc48-4fcc-86f8-ce96094eded1","Type":"ContainerStarted","Data":"4e1f50bc98a5bc4f414d10564624b648959d07a005147da0ad393c33e1caac35"} Feb 24 15:11:47 crc kubenswrapper[4982]: I0224 15:11:47.295390 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"83806fad-ace2-4023-9023-88d534d78650","Type":"ContainerStarted","Data":"81b1222b80a8e1a6df8ec10b7ccbcfc36069ee69d7d4e83faad812ed38ef05fd"} Feb 24 15:11:47 crc kubenswrapper[4982]: I0224 15:11:47.298004 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gsdm4" event={"ID":"8534c002-8446-4b80-ae93-6d529c59d1df","Type":"ContainerStarted","Data":"0280a4dd30b68f1f38ef1922742ae74b8e9762e0d781ad27986c26da9de04d5b"} Feb 24 15:11:47 crc kubenswrapper[4982]: I0224 15:11:47.336777 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.837302724 
podStartE2EDuration="33.336754685s" podCreationTimestamp="2026-02-24 15:11:14 +0000 UTC" firstStartedPulling="2026-02-24 15:11:36.018338316 +0000 UTC m=+1357.637396819" lastFinishedPulling="2026-02-24 15:11:45.517790287 +0000 UTC m=+1367.136848780" observedRunningTime="2026-02-24 15:11:47.329238131 +0000 UTC m=+1368.948296634" watchObservedRunningTime="2026-02-24 15:11:47.336754685 +0000 UTC m=+1368.955813178" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.435259 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.516477 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vvxd8"] Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.564558 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-wkkzz"] Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.568409 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.591130 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-wkkzz"] Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.607777 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.671363 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.686698 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-wkkzz\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") " pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.686799 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-config\") pod \"dnsmasq-dns-7cb5889db5-wkkzz\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") " pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.686907 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcdvl\" (UniqueName: \"kubernetes.io/projected/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-kube-api-access-bcdvl\") pod \"dnsmasq-dns-7cb5889db5-wkkzz\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") " pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.788420 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcdvl\" (UniqueName: \"kubernetes.io/projected/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-kube-api-access-bcdvl\") pod \"dnsmasq-dns-7cb5889db5-wkkzz\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") " pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.788539 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-wkkzz\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") " 
pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.788585 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-config\") pod \"dnsmasq-dns-7cb5889db5-wkkzz\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") " pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.789405 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-config\") pod \"dnsmasq-dns-7cb5889db5-wkkzz\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") " pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.789482 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-wkkzz\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") " pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.811180 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcdvl\" (UniqueName: \"kubernetes.io/projected/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-kube-api-access-bcdvl\") pod \"dnsmasq-dns-7cb5889db5-wkkzz\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") " pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:48 crc kubenswrapper[4982]: I0224 15:11:48.893380 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.334340 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.440274 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-wkkzz"] Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.441392 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.782169 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghtrh"] Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.814184 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-sn7zc"] Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.816543 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.819225 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.830666 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-sn7zc"] Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.889474 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.895346 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.897954 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.913141 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.913219 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.913445 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ks9qk" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.914052 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.914146 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf48g\" (UniqueName: \"kubernetes.io/projected/75ea04fc-ab4f-446c-a3f2-2e82f387068b-kube-api-access-pf48g\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.914187 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-config\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.914228 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.928641 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.943939 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xf5bb"] Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.946252 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.953908 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 24 15:11:49 crc kubenswrapper[4982]: I0224 15:11:49.973164 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xf5bb"] Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.021637 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf48g\" (UniqueName: \"kubernetes.io/projected/75ea04fc-ab4f-446c-a3f2-2e82f387068b-kube-api-access-pf48g\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.021717 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-config\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.021770 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.021808 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4f31cfa2-205d-4fea-8b18-d88cf906bc94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f31cfa2-205d-4fea-8b18-d88cf906bc94\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.021973 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6daa16f-c9d9-465a-8d00-711f5ef84326-cache\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.022070 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvc8d\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-kube-api-access-wvc8d\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.022142 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.022186 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6daa16f-c9d9-465a-8d00-711f5ef84326-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: 
I0224 15:11:50.022220 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b6daa16f-c9d9-465a-8d00-711f5ef84326-lock\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.022249 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.023854 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-config\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.026838 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.028309 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.073992 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf48g\" (UniqueName: \"kubernetes.io/projected/75ea04fc-ab4f-446c-a3f2-2e82f387068b-kube-api-access-pf48g\") pod \"dnsmasq-dns-8cc7fc4dc-sn7zc\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.123847 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvc8d\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-kube-api-access-wvc8d\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.124151 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6846e57a-0d17-4fd1-b470-5923690ad622-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.124280 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6846e57a-0d17-4fd1-b470-5923690ad622-config\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.124373 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b6daa16f-c9d9-465a-8d00-711f5ef84326-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.124457 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b6daa16f-c9d9-465a-8d00-711f5ef84326-lock\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.124546 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.124716 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn5q7\" (UniqueName: \"kubernetes.io/projected/6846e57a-0d17-4fd1-b470-5923690ad622-kube-api-access-nn5q7\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.124821 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6846e57a-0d17-4fd1-b470-5923690ad622-ovn-rundir\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.124921 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4f31cfa2-205d-4fea-8b18-d88cf906bc94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f31cfa2-205d-4fea-8b18-d88cf906bc94\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.125064 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6846e57a-0d17-4fd1-b470-5923690ad622-combined-ca-bundle\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.125199 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b6daa16f-c9d9-465a-8d00-711f5ef84326-lock\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.125374 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6daa16f-c9d9-465a-8d00-711f5ef84326-cache\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.125418 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6846e57a-0d17-4fd1-b470-5923690ad622-ovs-rundir\") pod \"ovn-controller-metrics-xf5bb\" (UID: 
\"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: E0224 15:11:50.125435 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 15:11:50 crc kubenswrapper[4982]: E0224 15:11:50.125469 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 15:11:50 crc kubenswrapper[4982]: E0224 15:11:50.125625 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift podName:b6daa16f-c9d9-465a-8d00-711f5ef84326 nodeName:}" failed. No retries permitted until 2026-02-24 15:11:50.625579435 +0000 UTC m=+1372.244637928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift") pod "swift-storage-0" (UID: "b6daa16f-c9d9-465a-8d00-711f5ef84326") : configmap "swift-ring-files" not found Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.126101 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b6daa16f-c9d9-465a-8d00-711f5ef84326-cache\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.130215 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6daa16f-c9d9-465a-8d00-711f5ef84326-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.130409 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.130438 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4f31cfa2-205d-4fea-8b18-d88cf906bc94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f31cfa2-205d-4fea-8b18-d88cf906bc94\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03fdc1ab360ff2fe06776656263ab191fe271fbbebcabe85b6874b2b6166ad79/globalmount\"" pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.150488 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.156307 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvc8d\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-kube-api-access-wvc8d\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.211172 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cwcwd"] Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.213277 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.225376 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.225622 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.228784 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.229889 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6846e57a-0d17-4fd1-b470-5923690ad622-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.235778 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6846e57a-0d17-4fd1-b470-5923690ad622-config\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.235982 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn5q7\" (UniqueName: \"kubernetes.io/projected/6846e57a-0d17-4fd1-b470-5923690ad622-kube-api-access-nn5q7\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.236214 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6846e57a-0d17-4fd1-b470-5923690ad622-ovn-rundir\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.236488 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6846e57a-0d17-4fd1-b470-5923690ad622-combined-ca-bundle\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.236670 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6846e57a-0d17-4fd1-b470-5923690ad622-ovs-rundir\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.236807 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6846e57a-0d17-4fd1-b470-5923690ad622-config\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.237823 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6846e57a-0d17-4fd1-b470-5923690ad622-ovn-rundir\") pod \"ovn-controller-metrics-xf5bb\" 
(UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.239337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6846e57a-0d17-4fd1-b470-5923690ad622-ovs-rundir\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.241773 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6846e57a-0d17-4fd1-b470-5923690ad622-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.254871 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cwcwd"] Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.256375 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6846e57a-0d17-4fd1-b470-5923690ad622-combined-ca-bundle\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.268478 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn5q7\" (UniqueName: \"kubernetes.io/projected/6846e57a-0d17-4fd1-b470-5923690ad622-kube-api-access-nn5q7\") pod \"ovn-controller-metrics-xf5bb\" (UID: \"6846e57a-0d17-4fd1-b470-5923690ad622\") " pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.278698 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xf5bb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.291314 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4f31cfa2-205d-4fea-8b18-d88cf906bc94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4f31cfa2-205d-4fea-8b18-d88cf906bc94\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.334329 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-wkkzz"] Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.340654 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-swiftconf\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.340695 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-ring-data-devices\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.340722 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-combined-ca-bundle\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.340834 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-scripts\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.340859 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-dispersionconf\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.340883 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4ldp\" (UniqueName: \"kubernetes.io/projected/e5321445-9e2b-44c7-9975-2bfe929ead53-kube-api-access-j4ldp\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.340901 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5321445-9e2b-44c7-9975-2bfe929ead53-etc-swift\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.372657 4982 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jjgq9"] Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.374695 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.379397 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.410175 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jjgq9"] Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.416220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" event={"ID":"650ddaaa-9b21-4a69-bbf7-d563a40d7c61","Type":"ContainerStarted","Data":"ae00f01e73c43e61115529378c14b6debaa6d04a2e3a93db861eeaa73b032f82"} Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.444899 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-scripts\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.444963 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-dispersionconf\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.444997 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4ldp\" (UniqueName: \"kubernetes.io/projected/e5321445-9e2b-44c7-9975-2bfe929ead53-kube-api-access-j4ldp\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.445029 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5321445-9e2b-44c7-9975-2bfe929ead53-etc-swift\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.445111 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-swiftconf\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.445157 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-ring-data-devices\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.445197 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-combined-ca-bundle\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " 
pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.446200 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-scripts\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.448810 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5321445-9e2b-44c7-9975-2bfe929ead53-etc-swift\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.451065 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-ring-data-devices\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.455037 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-dispersionconf\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.461716 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-swiftconf\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.468769 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-combined-ca-bundle\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.485178 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4ldp\" (UniqueName: \"kubernetes.io/projected/e5321445-9e2b-44c7-9975-2bfe929ead53-kube-api-access-j4ldp\") pod \"swift-ring-rebalance-cwcwd\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.547419 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.547467 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.547564 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcrc\" (UniqueName: \"kubernetes.io/projected/612828ed-0dec-4a50-b9af-2e9e63864167-kube-api-access-gtcrc\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.547595 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.547658 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-config\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.632385 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.649147 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.649196 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.649235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.649278 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtcrc\" (UniqueName: \"kubernetes.io/projected/612828ed-0dec-4a50-b9af-2e9e63864167-kube-api-access-gtcrc\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.649307 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.649341 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-config\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: 
\"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.650126 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-config\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.651186 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: E0224 15:11:50.651306 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 15:11:50 crc kubenswrapper[4982]: E0224 15:11:50.651359 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 15:11:50 crc kubenswrapper[4982]: E0224 15:11:50.651443 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift podName:b6daa16f-c9d9-465a-8d00-711f5ef84326 nodeName:}" failed. No retries permitted until 2026-02-24 15:11:51.651403203 +0000 UTC m=+1373.270461766 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift") pod "swift-storage-0" (UID: "b6daa16f-c9d9-465a-8d00-711f5ef84326") : configmap "swift-ring-files" not found Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.651471 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.652023 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.673454 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtcrc\" (UniqueName: \"kubernetes.io/projected/612828ed-0dec-4a50-b9af-2e9e63864167-kube-api-access-gtcrc\") pod \"dnsmasq-dns-b8fbc5445-jjgq9\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.699110 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.825162 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-sn7zc"] Feb 24 15:11:50 crc kubenswrapper[4982]: W0224 15:11:50.941935 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6846e57a_0d17_4fd1_b470_5923690ad622.slice/crio-0e8d4986519fb7cff9299f94d10256b4a021d310197e46605a8b8ccd716acabb WatchSource:0}: Error finding container 0e8d4986519fb7cff9299f94d10256b4a021d310197e46605a8b8ccd716acabb: Status 404 returned error can't find the container with id 0e8d4986519fb7cff9299f94d10256b4a021d310197e46605a8b8ccd716acabb Feb 24 15:11:50 crc kubenswrapper[4982]: I0224 15:11:50.948425 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xf5bb"] Feb 24 15:11:51 crc kubenswrapper[4982]: I0224 15:11:51.135550 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cwcwd"] Feb 24 15:11:51 crc kubenswrapper[4982]: W0224 15:11:51.137633 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5321445_9e2b_44c7_9975_2bfe929ead53.slice/crio-2a780370c69889f9e433676fd55ea6c5a02ed9d385306a4ce4ce6d37d32bd641 WatchSource:0}: Error finding container 2a780370c69889f9e433676fd55ea6c5a02ed9d385306a4ce4ce6d37d32bd641: Status 404 returned error can't find the container with id 2a780370c69889f9e433676fd55ea6c5a02ed9d385306a4ce4ce6d37d32bd641 Feb 24 15:11:51 crc kubenswrapper[4982]: I0224 15:11:51.282301 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jjgq9"] Feb 24 15:11:51 crc kubenswrapper[4982]: I0224 15:11:51.426791 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" event={"ID":"612828ed-0dec-4a50-b9af-2e9e63864167","Type":"ContainerStarted","Data":"95a8750796ab31c0a773a490f7ce2d9dc85b8214877f375d1609985a756bb4c6"} Feb 24 15:11:51 crc kubenswrapper[4982]: I0224 15:11:51.428107 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwcwd" event={"ID":"e5321445-9e2b-44c7-9975-2bfe929ead53","Type":"ContainerStarted","Data":"2a780370c69889f9e433676fd55ea6c5a02ed9d385306a4ce4ce6d37d32bd641"} Feb 24 15:11:51 crc kubenswrapper[4982]: I0224 15:11:51.429362 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" event={"ID":"75ea04fc-ab4f-446c-a3f2-2e82f387068b","Type":"ContainerStarted","Data":"683704ac750c6a8abd01d8aa042c58a406fd1dca113f6ff6230dc4a06261ec32"} Feb 24 15:11:51 crc kubenswrapper[4982]: I0224 15:11:51.430424 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xf5bb" event={"ID":"6846e57a-0d17-4fd1-b470-5923690ad622","Type":"ContainerStarted","Data":"0e8d4986519fb7cff9299f94d10256b4a021d310197e46605a8b8ccd716acabb"} Feb 24 15:11:51 crc kubenswrapper[4982]: I0224 15:11:51.675208 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:51 crc kubenswrapper[4982]: E0224 15:11:51.675459 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: 
configmap "swift-ring-files" not found Feb 24 15:11:51 crc kubenswrapper[4982]: E0224 15:11:51.675706 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 15:11:51 crc kubenswrapper[4982]: E0224 15:11:51.675810 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift podName:b6daa16f-c9d9-465a-8d00-711f5ef84326 nodeName:}" failed. No retries permitted until 2026-02-24 15:11:53.675754914 +0000 UTC m=+1375.294813417 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift") pod "swift-storage-0" (UID: "b6daa16f-c9d9-465a-8d00-711f5ef84326") : configmap "swift-ring-files" not found Feb 24 15:11:52 crc kubenswrapper[4982]: I0224 15:11:52.438279 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gsdm4" Feb 24 15:11:52 crc kubenswrapper[4982]: I0224 15:11:52.438448 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gsdm4" Feb 24 15:11:52 crc kubenswrapper[4982]: I0224 15:11:52.462145 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gsdm4" podStartSLOduration=35.681282858 podStartE2EDuration="42.462125236s" podCreationTimestamp="2026-02-24 15:11:10 +0000 UTC" firstStartedPulling="2026-02-24 15:11:36.357288416 +0000 UTC m=+1357.976346909" lastFinishedPulling="2026-02-24 15:11:43.138130794 +0000 UTC m=+1364.757189287" observedRunningTime="2026-02-24 15:11:52.459493065 +0000 UTC m=+1374.078551558" watchObservedRunningTime="2026-02-24 15:11:52.462125236 +0000 UTC m=+1374.081183729" Feb 24 15:11:52 crc kubenswrapper[4982]: I0224 15:11:52.488936 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=34.324508112 podStartE2EDuration="42.488913925s" podCreationTimestamp="2026-02-24 15:11:10 +0000 UTC" firstStartedPulling="2026-02-24 15:11:37.369232475 +0000 UTC m=+1358.988290968" lastFinishedPulling="2026-02-24 15:11:45.533638288 +0000 UTC m=+1367.152696781" observedRunningTime="2026-02-24 15:11:52.475104519 +0000 UTC m=+1374.094163012" watchObservedRunningTime="2026-02-24 15:11:52.488913925 +0000 UTC m=+1374.107972408" Feb 24 15:11:53 crc kubenswrapper[4982]: I0224 15:11:53.448365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerStarted","Data":"3cba6fe01c06b31027a2c8b1fb1adcd803560abb67f4eb8d66faf7d1c6d63cd8"} Feb 24 15:11:53 crc kubenswrapper[4982]: I0224 15:11:53.450796 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xf5bb" event={"ID":"6846e57a-0d17-4fd1-b470-5923690ad622","Type":"ContainerStarted","Data":"d48f184cb0a1a9eb261e040d617fd0b5a0104c33532d77ee1202919058374c47"} Feb 24 15:11:53 crc kubenswrapper[4982]: I0224 15:11:53.494427 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xf5bb" podStartSLOduration=4.494408934 podStartE2EDuration="4.494408934s" podCreationTimestamp="2026-02-24 15:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:11:53.487756843 +0000 
UTC m=+1375.106815346" watchObservedRunningTime="2026-02-24 15:11:53.494408934 +0000 UTC m=+1375.113467427" Feb 24 15:11:53 crc kubenswrapper[4982]: I0224 15:11:53.722976 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:53 crc kubenswrapper[4982]: E0224 15:11:53.723188 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 15:11:53 crc kubenswrapper[4982]: E0224 15:11:53.723388 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 15:11:53 crc kubenswrapper[4982]: E0224 15:11:53.723437 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift podName:b6daa16f-c9d9-465a-8d00-711f5ef84326 nodeName:}" failed. No retries permitted until 2026-02-24 15:11:57.723423672 +0000 UTC m=+1379.342482165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift") pod "swift-storage-0" (UID: "b6daa16f-c9d9-465a-8d00-711f5ef84326") : configmap "swift-ring-files" not found Feb 24 15:11:54 crc kubenswrapper[4982]: E0224 15:11:54.026317 4982 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.50:58334->38.102.83.50:38677: write tcp 38.102.83.50:58334->38.102.83.50:38677: write: broken pipe Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.026559 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.026624 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.266806 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.323102 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.470145 4982 generic.go:334] "Generic (PLEG): container finished" podID="650ddaaa-9b21-4a69-bbf7-d563a40d7c61" containerID="d1eca7ea2ee6ce98e7f290fbfc2011af522870dea80dda247b9cefedcdff9c11" exitCode=0 Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.470202 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" event={"ID":"650ddaaa-9b21-4a69-bbf7-d563a40d7c61","Type":"ContainerDied","Data":"d1eca7ea2ee6ce98e7f290fbfc2011af522870dea80dda247b9cefedcdff9c11"} Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.483529 4982 generic.go:334] "Generic (PLEG): container finished" podID="bea504cd-94bd-4526-a1c8-e26fc2bcb918" containerID="02019b8d007b4d8817ba99b2d2f05b0494ef2bef795e050b92d5ace2e3db581e" exitCode=0 Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.483593 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" event={"ID":"bea504cd-94bd-4526-a1c8-e26fc2bcb918","Type":"ContainerDied","Data":"02019b8d007b4d8817ba99b2d2f05b0494ef2bef795e050b92d5ace2e3db581e"} Feb 24 
15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.504039 4982 generic.go:334] "Generic (PLEG): container finished" podID="612828ed-0dec-4a50-b9af-2e9e63864167" containerID="d2ee803e090636e5d60646ffcebd0b4e2ad9c37cb50b19a6f504cd6974215a85" exitCode=0 Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.504417 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" event={"ID":"612828ed-0dec-4a50-b9af-2e9e63864167","Type":"ContainerDied","Data":"d2ee803e090636e5d60646ffcebd0b4e2ad9c37cb50b19a6f504cd6974215a85"} Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.545230 4982 generic.go:334] "Generic (PLEG): container finished" podID="75ea04fc-ab4f-446c-a3f2-2e82f387068b" containerID="b82c495e97d79754ad8216c617b6a9723ed1c120407cd3206a9d9e7722ae4e4d" exitCode=0 Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.545375 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" event={"ID":"75ea04fc-ab4f-446c-a3f2-2e82f387068b","Type":"ContainerDied","Data":"b82c495e97d79754ad8216c617b6a9723ed1c120407cd3206a9d9e7722ae4e4d"} Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.549816 4982 generic.go:334] "Generic (PLEG): container finished" podID="1d7115b8-e3e3-42b8-bf78-964d9b28ff90" containerID="8f389fcea7c9dbdb1df5668a7ddf0fb140731ae66d1a5c3182c502fae0a30520" exitCode=0 Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.549852 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" event={"ID":"1d7115b8-e3e3-42b8-bf78-964d9b28ff90","Type":"ContainerDied","Data":"8f389fcea7c9dbdb1df5668a7ddf0fb140731ae66d1a5c3182c502fae0a30520"} Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.550984 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.655394 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.888195 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.889972 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.892514 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.892918 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.895800 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.902836 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-rpzzf" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.907363 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.945763 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.952991 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86cf4035-92fa-4d4d-9ce7-ca961da212c2-scripts\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.953060 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86cf4035-92fa-4d4d-9ce7-ca961da212c2-config\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.953220 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/86cf4035-92fa-4d4d-9ce7-ca961da212c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.953251 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cf4035-92fa-4d4d-9ce7-ca961da212c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.953279 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h7xt\" (UniqueName: \"kubernetes.io/projected/86cf4035-92fa-4d4d-9ce7-ca961da212c2-kube-api-access-4h7xt\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.953311 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cf4035-92fa-4d4d-9ce7-ca961da212c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:54 crc kubenswrapper[4982]: I0224 15:11:54.953351 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86cf4035-92fa-4d4d-9ce7-ca961da212c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.061803 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86cf4035-92fa-4d4d-9ce7-ca961da212c2-scripts\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.062087 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86cf4035-92fa-4d4d-9ce7-ca961da212c2-config\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.062752 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86cf4035-92fa-4d4d-9ce7-ca961da212c2-config\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.062756 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86cf4035-92fa-4d4d-9ce7-ca961da212c2-scripts\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.064258 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/86cf4035-92fa-4d4d-9ce7-ca961da212c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.064326 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cf4035-92fa-4d4d-9ce7-ca961da212c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.064356 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h7xt\" (UniqueName: \"kubernetes.io/projected/86cf4035-92fa-4d4d-9ce7-ca961da212c2-kube-api-access-4h7xt\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.064419 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cf4035-92fa-4d4d-9ce7-ca961da212c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.064490 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cf4035-92fa-4d4d-9ce7-ca961da212c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.066455 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/86cf4035-92fa-4d4d-9ce7-ca961da212c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.078975 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cf4035-92fa-4d4d-9ce7-ca961da212c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.085381 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h7xt\" (UniqueName: \"kubernetes.io/projected/86cf4035-92fa-4d4d-9ce7-ca961da212c2-kube-api-access-4h7xt\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.087095 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.096174 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cf4035-92fa-4d4d-9ce7-ca961da212c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.100309 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cf4035-92fa-4d4d-9ce7-ca961da212c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"86cf4035-92fa-4d4d-9ce7-ca961da212c2\") " pod="openstack/ovn-northd-0" Feb 24 15:11:55 crc kubenswrapper[4982]: I0224 15:11:55.240754 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 15:11:56 crc kubenswrapper[4982]: I0224 15:11:56.908246 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-04b1-account-create-update-pknzw"] Feb 24 15:11:56 crc kubenswrapper[4982]: I0224 15:11:56.910113 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-04b1-account-create-update-pknzw" Feb 24 15:11:56 crc kubenswrapper[4982]: I0224 15:11:56.916799 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 24 15:11:56 crc kubenswrapper[4982]: I0224 15:11:56.929398 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr9k6\" (UniqueName: \"kubernetes.io/projected/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-kube-api-access-qr9k6\") pod \"keystone-04b1-account-create-update-pknzw\" (UID: \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\") " pod="openstack/keystone-04b1-account-create-update-pknzw" Feb 24 15:11:56 crc kubenswrapper[4982]: I0224 15:11:56.930076 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-operator-scripts\") pod \"keystone-04b1-account-create-update-pknzw\" (UID: \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\") " pod="openstack/keystone-04b1-account-create-update-pknzw" Feb 24 15:11:56 crc kubenswrapper[4982]: I0224 15:11:56.934950 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-04b1-account-create-update-pknzw"] Feb 24 15:11:56 crc kubenswrapper[4982]: I0224 15:11:56.977418 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8fv24"] Feb 24 15:11:56 crc kubenswrapper[4982]: I0224 15:11:56.998409 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fv24" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.000716 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8fv24"] Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.036590 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d043fa3e-3790-470f-8b7d-743969cede03-operator-scripts\") pod \"keystone-db-create-8fv24\" (UID: \"d043fa3e-3790-470f-8b7d-743969cede03\") " pod="openstack/keystone-db-create-8fv24" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.036847 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-operator-scripts\") pod \"keystone-04b1-account-create-update-pknzw\" (UID: \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\") " pod="openstack/keystone-04b1-account-create-update-pknzw" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.036967 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsrsh\" (UniqueName: \"kubernetes.io/projected/d043fa3e-3790-470f-8b7d-743969cede03-kube-api-access-tsrsh\") pod \"keystone-db-create-8fv24\" (UID: \"d043fa3e-3790-470f-8b7d-743969cede03\") " pod="openstack/keystone-db-create-8fv24" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.037070 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9k6\" (UniqueName: \"kubernetes.io/projected/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-kube-api-access-qr9k6\") pod \"keystone-04b1-account-create-update-pknzw\" (UID: \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\") " pod="openstack/keystone-04b1-account-create-update-pknzw" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.038393 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-operator-scripts\") pod \"keystone-04b1-account-create-update-pknzw\" (UID: \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\") " pod="openstack/keystone-04b1-account-create-update-pknzw" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.055323 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr9k6\" (UniqueName: \"kubernetes.io/projected/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-kube-api-access-qr9k6\") pod \"keystone-04b1-account-create-update-pknzw\" (UID: \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\") " pod="openstack/keystone-04b1-account-create-update-pknzw" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.104597 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pbbft"] Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.105813 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pbbft" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.111042 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pbbft"] Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.138631 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d043fa3e-3790-470f-8b7d-743969cede03-operator-scripts\") pod \"keystone-db-create-8fv24\" (UID: \"d043fa3e-3790-470f-8b7d-743969cede03\") " pod="openstack/keystone-db-create-8fv24" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.138765 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsrsh\" (UniqueName: \"kubernetes.io/projected/d043fa3e-3790-470f-8b7d-743969cede03-kube-api-access-tsrsh\") pod \"keystone-db-create-8fv24\" (UID: \"d043fa3e-3790-470f-8b7d-743969cede03\") " pod="openstack/keystone-db-create-8fv24" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.139751 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d043fa3e-3790-470f-8b7d-743969cede03-operator-scripts\") pod \"keystone-db-create-8fv24\" (UID: \"d043fa3e-3790-470f-8b7d-743969cede03\") " pod="openstack/keystone-db-create-8fv24" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.165119 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsrsh\" (UniqueName: \"kubernetes.io/projected/d043fa3e-3790-470f-8b7d-743969cede03-kube-api-access-tsrsh\") pod \"keystone-db-create-8fv24\" (UID: \"d043fa3e-3790-470f-8b7d-743969cede03\") " pod="openstack/keystone-db-create-8fv24" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.216424 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b0b6-account-create-update-zvgjp"] Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.219806 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b0b6-account-create-update-zvgjp" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.223624 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.229194 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b0b6-account-create-update-zvgjp"] Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.240030 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-04b1-account-create-update-pknzw" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.240970 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd310a49-0087-4874-82ba-63f6d002bf18-operator-scripts\") pod \"placement-db-create-pbbft\" (UID: \"fd310a49-0087-4874-82ba-63f6d002bf18\") " pod="openstack/placement-db-create-pbbft" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.241962 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rwp\" (UniqueName: \"kubernetes.io/projected/fd310a49-0087-4874-82ba-63f6d002bf18-kube-api-access-x7rwp\") pod \"placement-db-create-pbbft\" (UID: \"fd310a49-0087-4874-82ba-63f6d002bf18\") " pod="openstack/placement-db-create-pbbft" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.332858 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fv24" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.344162 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd310a49-0087-4874-82ba-63f6d002bf18-operator-scripts\") pod \"placement-db-create-pbbft\" (UID: \"fd310a49-0087-4874-82ba-63f6d002bf18\") " pod="openstack/placement-db-create-pbbft" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.344289 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99273cd9-0c56-4024-9df0-e8a8294bb346-operator-scripts\") pod \"placement-b0b6-account-create-update-zvgjp\" (UID: \"99273cd9-0c56-4024-9df0-e8a8294bb346\") " pod="openstack/placement-b0b6-account-create-update-zvgjp" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.344345 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5s26\" (UniqueName: \"kubernetes.io/projected/99273cd9-0c56-4024-9df0-e8a8294bb346-kube-api-access-p5s26\") pod \"placement-b0b6-account-create-update-zvgjp\" (UID: \"99273cd9-0c56-4024-9df0-e8a8294bb346\") " pod="openstack/placement-b0b6-account-create-update-zvgjp" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.344457 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rwp\" (UniqueName: \"kubernetes.io/projected/fd310a49-0087-4874-82ba-63f6d002bf18-kube-api-access-x7rwp\") pod \"placement-db-create-pbbft\" (UID: \"fd310a49-0087-4874-82ba-63f6d002bf18\") " pod="openstack/placement-db-create-pbbft" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.349054 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd310a49-0087-4874-82ba-63f6d002bf18-operator-scripts\") 
pod \"placement-db-create-pbbft\" (UID: \"fd310a49-0087-4874-82ba-63f6d002bf18\") " pod="openstack/placement-db-create-pbbft" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.372350 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rwp\" (UniqueName: \"kubernetes.io/projected/fd310a49-0087-4874-82ba-63f6d002bf18-kube-api-access-x7rwp\") pod \"placement-db-create-pbbft\" (UID: \"fd310a49-0087-4874-82ba-63f6d002bf18\") " pod="openstack/placement-db-create-pbbft" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.446622 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5s26\" (UniqueName: \"kubernetes.io/projected/99273cd9-0c56-4024-9df0-e8a8294bb346-kube-api-access-p5s26\") pod \"placement-b0b6-account-create-update-zvgjp\" (UID: \"99273cd9-0c56-4024-9df0-e8a8294bb346\") " pod="openstack/placement-b0b6-account-create-update-zvgjp" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.446866 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99273cd9-0c56-4024-9df0-e8a8294bb346-operator-scripts\") pod \"placement-b0b6-account-create-update-zvgjp\" (UID: \"99273cd9-0c56-4024-9df0-e8a8294bb346\") " pod="openstack/placement-b0b6-account-create-update-zvgjp" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.447561 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99273cd9-0c56-4024-9df0-e8a8294bb346-operator-scripts\") pod \"placement-b0b6-account-create-update-zvgjp\" (UID: \"99273cd9-0c56-4024-9df0-e8a8294bb346\") " pod="openstack/placement-b0b6-account-create-update-zvgjp" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.462756 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5s26\" (UniqueName: \"kubernetes.io/projected/99273cd9-0c56-4024-9df0-e8a8294bb346-kube-api-access-p5s26\") pod \"placement-b0b6-account-create-update-zvgjp\" (UID: \"99273cd9-0c56-4024-9df0-e8a8294bb346\") " pod="openstack/placement-b0b6-account-create-update-zvgjp" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.525160 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pbbft" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.543109 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b0b6-account-create-update-zvgjp" Feb 24 15:11:57 crc kubenswrapper[4982]: I0224 15:11:57.754914 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:11:57 crc kubenswrapper[4982]: E0224 15:11:57.755164 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 15:11:57 crc kubenswrapper[4982]: E0224 15:11:57.755207 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 15:11:57 crc kubenswrapper[4982]: E0224 15:11:57.755282 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift podName:b6daa16f-c9d9-465a-8d00-711f5ef84326 nodeName:}" failed. No retries permitted until 2026-02-24 15:12:05.755260428 +0000 UTC m=+1387.374318941 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift") pod "swift-storage-0" (UID: "b6daa16f-c9d9-465a-8d00-711f5ef84326") : configmap "swift-ring-files" not found Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.025082 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.033025 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.047490 4982 util.go:48] "No ready sandbox for pod can be found. 
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.168324 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-dns-svc\") pod \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") "
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.168471 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-dns-svc\") pod \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") "
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.168511 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-config\") pod \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") "
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.168609 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-dns-svc\") pod \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") "
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.168652 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fmff\" (UniqueName: \"kubernetes.io/projected/bea504cd-94bd-4526-a1c8-e26fc2bcb918-kube-api-access-5fmff\") pod \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") "
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.168678 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-config\") pod \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\" (UID: \"bea504cd-94bd-4526-a1c8-e26fc2bcb918\") "
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.168702 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-config\") pod \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") "
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.168784 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2hx8\" (UniqueName: \"kubernetes.io/projected/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-kube-api-access-w2hx8\") pod \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\" (UID: \"1d7115b8-e3e3-42b8-bf78-964d9b28ff90\") "
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.168805 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcdvl\" (UniqueName: \"kubernetes.io/projected/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-kube-api-access-bcdvl\") pod \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\" (UID: \"650ddaaa-9b21-4a69-bbf7-d563a40d7c61\") "
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.200927 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea504cd-94bd-4526-a1c8-e26fc2bcb918-kube-api-access-5fmff" (OuterVolumeSpecName: "kube-api-access-5fmff") pod "bea504cd-94bd-4526-a1c8-e26fc2bcb918" (UID: "bea504cd-94bd-4526-a1c8-e26fc2bcb918"). InnerVolumeSpecName "kube-api-access-5fmff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.238317 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7xwm"]
Feb 24 15:11:58 crc kubenswrapper[4982]: E0224 15:11:58.238802 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea504cd-94bd-4526-a1c8-e26fc2bcb918" containerName="init"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.238815 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea504cd-94bd-4526-a1c8-e26fc2bcb918" containerName="init"
Feb 24 15:11:58 crc kubenswrapper[4982]: E0224 15:11:58.238832 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650ddaaa-9b21-4a69-bbf7-d563a40d7c61" containerName="init"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.238839 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="650ddaaa-9b21-4a69-bbf7-d563a40d7c61" containerName="init"
Feb 24 15:11:58 crc kubenswrapper[4982]: E0224 15:11:58.238855 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7115b8-e3e3-42b8-bf78-964d9b28ff90" containerName="init"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.238861 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7115b8-e3e3-42b8-bf78-964d9b28ff90" containerName="init"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.239040 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7115b8-e3e3-42b8-bf78-964d9b28ff90" containerName="init"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.239063 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea504cd-94bd-4526-a1c8-e26fc2bcb918" containerName="init"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.239071 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="650ddaaa-9b21-4a69-bbf7-d563a40d7c61" containerName="init"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.239776 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f7xwm"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.246042 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-kube-api-access-bcdvl" (OuterVolumeSpecName: "kube-api-access-bcdvl") pod "650ddaaa-9b21-4a69-bbf7-d563a40d7c61" (UID: "650ddaaa-9b21-4a69-bbf7-d563a40d7c61"). InnerVolumeSpecName "kube-api-access-bcdvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.257349 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-config" (OuterVolumeSpecName: "config") pod "650ddaaa-9b21-4a69-bbf7-d563a40d7c61" (UID: "650ddaaa-9b21-4a69-bbf7-d563a40d7c61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.267554 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-kube-api-access-w2hx8" (OuterVolumeSpecName: "kube-api-access-w2hx8") pod "1d7115b8-e3e3-42b8-bf78-964d9b28ff90" (UID: "1d7115b8-e3e3-42b8-bf78-964d9b28ff90"). InnerVolumeSpecName "kube-api-access-w2hx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.270917 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7xwm"]
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.275180 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fmff\" (UniqueName: \"kubernetes.io/projected/bea504cd-94bd-4526-a1c8-e26fc2bcb918-kube-api-access-5fmff\") on node \"crc\" DevicePath \"\""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.276059 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-config\") on node \"crc\" DevicePath \"\""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.276151 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2hx8\" (UniqueName: \"kubernetes.io/projected/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-kube-api-access-w2hx8\") on node \"crc\" DevicePath \"\""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.276226 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcdvl\" (UniqueName: \"kubernetes.io/projected/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-kube-api-access-bcdvl\") on node \"crc\" DevicePath \"\""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.291877 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "650ddaaa-9b21-4a69-bbf7-d563a40d7c61" (UID: "650ddaaa-9b21-4a69-bbf7-d563a40d7c61"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.293356 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d7115b8-e3e3-42b8-bf78-964d9b28ff90" (UID: "1d7115b8-e3e3-42b8-bf78-964d9b28ff90"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.298928 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bea504cd-94bd-4526-a1c8-e26fc2bcb918" (UID: "bea504cd-94bd-4526-a1c8-e26fc2bcb918"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.329657 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-config" (OuterVolumeSpecName: "config") pod "bea504cd-94bd-4526-a1c8-e26fc2bcb918" (UID: "bea504cd-94bd-4526-a1c8-e26fc2bcb918"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.380997 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpk26\" (UniqueName: \"kubernetes.io/projected/2358d960-bc18-4459-af89-9b2c0ae9081b-kube-api-access-mpk26\") pod \"mysqld-exporter-openstack-db-create-f7xwm\" (UID: \"2358d960-bc18-4459-af89-9b2c0ae9081b\") " pod="openstack/mysqld-exporter-openstack-db-create-f7xwm"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.381125 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2358d960-bc18-4459-af89-9b2c0ae9081b-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-f7xwm\" (UID: \"2358d960-bc18-4459-af89-9b2c0ae9081b\") " pod="openstack/mysqld-exporter-openstack-db-create-f7xwm"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.381246 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.381260 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.381274 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea504cd-94bd-4526-a1c8-e26fc2bcb918-config\") on node \"crc\" DevicePath \"\""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.381283 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650ddaaa-9b21-4a69-bbf7-d563a40d7c61-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.437976 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"]
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.439336 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.443414 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.448654 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"]
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.468131 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-config" (OuterVolumeSpecName: "config") pod "1d7115b8-e3e3-42b8-bf78-964d9b28ff90" (UID: "1d7115b8-e3e3-42b8-bf78-964d9b28ff90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.483630 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2358d960-bc18-4459-af89-9b2c0ae9081b-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-f7xwm\" (UID: \"2358d960-bc18-4459-af89-9b2c0ae9081b\") " pod="openstack/mysqld-exporter-openstack-db-create-f7xwm"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.483779 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpk26\" (UniqueName: \"kubernetes.io/projected/2358d960-bc18-4459-af89-9b2c0ae9081b-kube-api-access-mpk26\") pod \"mysqld-exporter-openstack-db-create-f7xwm\" (UID: \"2358d960-bc18-4459-af89-9b2c0ae9081b\") " pod="openstack/mysqld-exporter-openstack-db-create-f7xwm"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.483840 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d7115b8-e3e3-42b8-bf78-964d9b28ff90-config\") on node \"crc\" DevicePath \"\""
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.484369 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2358d960-bc18-4459-af89-9b2c0ae9081b-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-f7xwm\" (UID: \"2358d960-bc18-4459-af89-9b2c0ae9081b\") " pod="openstack/mysqld-exporter-openstack-db-create-f7xwm"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.530863 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpk26\" (UniqueName: \"kubernetes.io/projected/2358d960-bc18-4459-af89-9b2c0ae9081b-kube-api-access-mpk26\") pod \"mysqld-exporter-openstack-db-create-f7xwm\" (UID: \"2358d960-bc18-4459-af89-9b2c0ae9081b\") " pod="openstack/mysqld-exporter-openstack-db-create-f7xwm"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.588272 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-operator-scripts\") pod \"mysqld-exporter-3d1b-account-create-update-zdkh5\" (UID: \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\") " pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.589121 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2qpg\" (UniqueName: \"kubernetes.io/projected/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-kube-api-access-v2qpg\") pod \"mysqld-exporter-3d1b-account-create-update-zdkh5\" (UID: \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\") " pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.600644 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vvxd8" event={"ID":"1d7115b8-e3e3-42b8-bf78-964d9b28ff90","Type":"ContainerDied","Data":"d241cfe696ea6c73412af705d9185c1c969c4ca88bb6212900162e790f3b6461"}
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.600666 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vvxd8"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.600698 4982 scope.go:117] "RemoveContainer" containerID="8f389fcea7c9dbdb1df5668a7ddf0fb140731ae66d1a5c3182c502fae0a30520"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.602197 4982 generic.go:334] "Generic (PLEG): container finished" podID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerID="3cba6fe01c06b31027a2c8b1fb1adcd803560abb67f4eb8d66faf7d1c6d63cd8" exitCode=0
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.602275 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerDied","Data":"3cba6fe01c06b31027a2c8b1fb1adcd803560abb67f4eb8d66faf7d1c6d63cd8"}
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.604475 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz" event={"ID":"650ddaaa-9b21-4a69-bbf7-d563a40d7c61","Type":"ContainerDied","Data":"ae00f01e73c43e61115529378c14b6debaa6d04a2e3a93db861eeaa73b032f82"}
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.604573 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-wkkzz"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.607885 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh" event={"ID":"bea504cd-94bd-4526-a1c8-e26fc2bcb918","Type":"ContainerDied","Data":"44a71638b4b49056dde35e8063a1d7de2edded9a95ea73a4970a6532991d8dba"}
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.608665 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ghtrh"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.612133 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" event={"ID":"612828ed-0dec-4a50-b9af-2e9e63864167","Type":"ContainerStarted","Data":"e65a5b62b701be4a32fd5be5e8e37052a0d069f1465949ba316bc1b9eda32b7d"}
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.616148 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" event={"ID":"75ea04fc-ab4f-446c-a3f2-2e82f387068b","Type":"ContainerStarted","Data":"870b4e921ea87a79cf0350f0272a1310633ce47c2f136c16eea306184367d370"}
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.691739 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-operator-scripts\") pod \"mysqld-exporter-3d1b-account-create-update-zdkh5\" (UID: \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\") " pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.691931 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2qpg\" (UniqueName: \"kubernetes.io/projected/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-kube-api-access-v2qpg\") pod \"mysqld-exporter-3d1b-account-create-update-zdkh5\" (UID: \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\") " pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.692606 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-operator-scripts\") pod \"mysqld-exporter-3d1b-account-create-update-zdkh5\" (UID: \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\") " pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.714919 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2qpg\" (UniqueName: \"kubernetes.io/projected/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-kube-api-access-v2qpg\") pod \"mysqld-exporter-3d1b-account-create-update-zdkh5\" (UID: \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\") " pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.728489 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-wkkzz"]
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.761426 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-wkkzz"]
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.782936 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f7xwm"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.786745 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.834364 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vvxd8"]
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.852782 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vvxd8"]
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.896835 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghtrh"]
Feb 24 15:11:58 crc kubenswrapper[4982]: I0224 15:11:58.903785 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ghtrh"]
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.018602 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b0b6-account-create-update-zvgjp"]
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.037667 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pbbft"]
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.056713 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8fv24"]
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.118210 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-04b1-account-create-update-pknzw"]
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.184437 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7115b8-e3e3-42b8-bf78-964d9b28ff90" path="/var/lib/kubelet/pods/1d7115b8-e3e3-42b8-bf78-964d9b28ff90/volumes"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.193675 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650ddaaa-9b21-4a69-bbf7-d563a40d7c61" path="/var/lib/kubelet/pods/650ddaaa-9b21-4a69-bbf7-d563a40d7c61/volumes"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.194825 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea504cd-94bd-4526-a1c8-e26fc2bcb918" path="/var/lib/kubelet/pods/bea504cd-94bd-4526-a1c8-e26fc2bcb918/volumes"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.195459 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 24 15:11:59 crc kubenswrapper[4982]: W0224 15:11:59.584634 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd310a49_0087_4874_82ba_63f6d002bf18.slice/crio-f25c9a132d6677334144502ebd1a96a2b31152037904edbc8e94165c1ca2dc22 WatchSource:0}: Error finding container f25c9a132d6677334144502ebd1a96a2b31152037904edbc8e94165c1ca2dc22: Status 404 returned error can't find the container with id f25c9a132d6677334144502ebd1a96a2b31152037904edbc8e94165c1ca2dc22
Feb 24 15:11:59 crc kubenswrapper[4982]: W0224 15:11:59.587609 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd043fa3e_3790_470f_8b7d_743969cede03.slice/crio-aa14d7a1f063ca02dc0581aaa5d8dfdd96a0011d775140f8a6aee9bc12abf3c3 WatchSource:0}: Error finding container aa14d7a1f063ca02dc0581aaa5d8dfdd96a0011d775140f8a6aee9bc12abf3c3: Status 404 returned error can't find the container with id aa14d7a1f063ca02dc0581aaa5d8dfdd96a0011d775140f8a6aee9bc12abf3c3
Feb 24 15:11:59 crc kubenswrapper[4982]: W0224 15:11:59.591298 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd66ea93a_f4f2_47c2_a9da_27a82afa75bd.slice/crio-f07683a2f4249e6cb1f17aaad71e8ea2f5d503b721624f287db2a438dac16df6 WatchSource:0}: Error finding container f07683a2f4249e6cb1f17aaad71e8ea2f5d503b721624f287db2a438dac16df6: Status 404 returned error can't find the container with id f07683a2f4249e6cb1f17aaad71e8ea2f5d503b721624f287db2a438dac16df6
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.622301 4982 scope.go:117] "RemoveContainer" containerID="d1eca7ea2ee6ce98e7f290fbfc2011af522870dea80dda247b9cefedcdff9c11"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.637272 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pbbft" event={"ID":"fd310a49-0087-4874-82ba-63f6d002bf18","Type":"ContainerStarted","Data":"f25c9a132d6677334144502ebd1a96a2b31152037904edbc8e94165c1ca2dc22"}
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.639400 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fv24" event={"ID":"d043fa3e-3790-470f-8b7d-743969cede03","Type":"ContainerStarted","Data":"aa14d7a1f063ca02dc0581aaa5d8dfdd96a0011d775140f8a6aee9bc12abf3c3"}
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.642097 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"86cf4035-92fa-4d4d-9ce7-ca961da212c2","Type":"ContainerStarted","Data":"abf97efafea44ad586ca197f67435a8075caf2caeb0e0a68a18f41680b162f31"}
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.643557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-04b1-account-create-update-pknzw" event={"ID":"d66ea93a-f4f2-47c2-a9da-27a82afa75bd","Type":"ContainerStarted","Data":"f07683a2f4249e6cb1f17aaad71e8ea2f5d503b721624f287db2a438dac16df6"}
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.645576 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b0b6-account-create-update-zvgjp" event={"ID":"99273cd9-0c56-4024-9df0-e8a8294bb346","Type":"ContainerStarted","Data":"211284470cc0bad9b9ede3b56758336d07bc47af49a6ae21d9ea44313dd35c39"}
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.645632 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.645800 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.655044 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.671811 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.686905 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" podStartSLOduration=7.946540089 podStartE2EDuration="10.68688252s" podCreationTimestamp="2026-02-24 15:11:49 +0000 UTC" firstStartedPulling="2026-02-24 15:11:50.824978662 +0000 UTC m=+1372.444037155" lastFinishedPulling="2026-02-24 15:11:53.565321093 +0000 UTC m=+1375.184379586" observedRunningTime="2026-02-24 15:11:59.667296257 +0000 UTC m=+1381.286354750" watchObservedRunningTime="2026-02-24 15:11:59.68688252 +0000 UTC m=+1381.305941013"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.742168 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" podStartSLOduration=7.466196368 podStartE2EDuration="9.742146602s" podCreationTimestamp="2026-02-24 15:11:50 +0000 UTC" firstStartedPulling="2026-02-24 15:11:51.290829478 +0000 UTC m=+1372.909887971" lastFinishedPulling="2026-02-24 15:11:53.566779712 +0000 UTC m=+1375.185838205" observedRunningTime="2026-02-24 15:11:59.69791881 +0000 UTC m=+1381.316977293" watchObservedRunningTime="2026-02-24 15:11:59.742146602 +0000 UTC m=+1381.361205095"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.807666 4982 scope.go:117] "RemoveContainer" containerID="02019b8d007b4d8817ba99b2d2f05b0494ef2bef795e050b92d5ace2e3db581e"
Feb 24 15:11:59 crc kubenswrapper[4982]: I0224 15:11:59.983079 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.165467 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.247595 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.262814 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532432-rp655"]
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.265310 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532432-rp655"
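[Editor's note] The "Observed pod startup duration" records above expose how the tracker's figures relate to one another. For dnsmasq-dns-8cc7fc4dc-sn7zc, watchObservedRunningTime minus podCreationTimestamp gives exactly the logged podStartE2EDuration (10.68688252s), and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling, 2.740342431s) gives exactly the logged podStartSLOduration (7.946540089). The same relation holds for the other latency records in this section. The relation is inferred from the logged numbers, not from kubelet source; a small recomputation from the timestamps copied out of the entry above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Go's time.Parse accepts fractional seconds even when the layout
    	// omits them, so one layout covers all four timestamps.
    	const layout = "2006-01-02 15:04:05 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2026-02-24 15:11:49 +0000 UTC")
    	firstPull := parse("2026-02-24 15:11:50.824978662 +0000 UTC")
    	lastPull := parse("2026-02-24 15:11:53.565321093 +0000 UTC")
    	observed := parse("2026-02-24 15:11:59.68688252 +0000 UTC")

    	e2e := observed.Sub(created)         // podStartE2EDuration: 10.68688252s
    	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 7.946540089s
    	fmt.Println(e2e, slo)
    }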
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.276765 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532432-rp655"]
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.278233 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.278438 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.278510 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.305095 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"]
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.346400 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7xwm"]
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.362986 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7vht\" (UniqueName: \"kubernetes.io/projected/6e68d2ce-821e-4070-affc-79adbc1a34ca-kube-api-access-l7vht\") pod \"auto-csr-approver-29532432-rp655\" (UID: \"6e68d2ce-821e-4070-affc-79adbc1a34ca\") " pod="openshift-infra/auto-csr-approver-29532432-rp655"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.468002 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7vht\" (UniqueName: \"kubernetes.io/projected/6e68d2ce-821e-4070-affc-79adbc1a34ca-kube-api-access-l7vht\") pod \"auto-csr-approver-29532432-rp655\" (UID: \"6e68d2ce-821e-4070-affc-79adbc1a34ca\") " pod="openshift-infra/auto-csr-approver-29532432-rp655"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.484677 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7vht\" (UniqueName: \"kubernetes.io/projected/6e68d2ce-821e-4070-affc-79adbc1a34ca-kube-api-access-l7vht\") pod \"auto-csr-approver-29532432-rp655\" (UID: \"6e68d2ce-821e-4070-affc-79adbc1a34ca\") " pod="openshift-infra/auto-csr-approver-29532432-rp655"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.624189 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532432-rp655"
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.675551 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5" event={"ID":"ed0c055f-9963-4ca0-8a2b-dfda0d72f960","Type":"ContainerStarted","Data":"113d027367c5c62e73d0b4cfeaeb3e9024aef9574a944d5f7580595c9133bfea"}
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.681929 4982 generic.go:334] "Generic (PLEG): container finished" podID="99273cd9-0c56-4024-9df0-e8a8294bb346" containerID="7040ec195045fad266d02bb9c64d6b39c1e00ef0b4ab49c83dae1775a1e36ce3" exitCode=0
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.682010 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b0b6-account-create-update-zvgjp" event={"ID":"99273cd9-0c56-4024-9df0-e8a8294bb346","Type":"ContainerDied","Data":"7040ec195045fad266d02bb9c64d6b39c1e00ef0b4ab49c83dae1775a1e36ce3"}
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.684176 4982 generic.go:334] "Generic (PLEG): container finished" podID="fd310a49-0087-4874-82ba-63f6d002bf18" containerID="465d0f849753128a5488b00177622240b7a727f0452af3e8ae08f6833bbd8e55" exitCode=0
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.684300 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pbbft" event={"ID":"fd310a49-0087-4874-82ba-63f6d002bf18","Type":"ContainerDied","Data":"465d0f849753128a5488b00177622240b7a727f0452af3e8ae08f6833bbd8e55"}
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.686032 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f7xwm" event={"ID":"2358d960-bc18-4459-af89-9b2c0ae9081b","Type":"ContainerStarted","Data":"8a044d6e339a826bec05d454fc92eb669900eac778f180fa3b81554cb316116a"}
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.688089 4982 generic.go:334] "Generic (PLEG): container finished" podID="d66ea93a-f4f2-47c2-a9da-27a82afa75bd" containerID="65d5456fd15006a55ef32dad2687d336952977f5d65d2c10c5a46b44b1dca752" exitCode=0
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.688183 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-04b1-account-create-update-pknzw" event={"ID":"d66ea93a-f4f2-47c2-a9da-27a82afa75bd","Type":"ContainerDied","Data":"65d5456fd15006a55ef32dad2687d336952977f5d65d2c10c5a46b44b1dca752"}
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.690581 4982 generic.go:334] "Generic (PLEG): container finished" podID="d043fa3e-3790-470f-8b7d-743969cede03" containerID="99561028cf3107e0e1773a0cbb9bd019fc00c210cec2558faae75c76f45e4914" exitCode=0
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.690633 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fv24" event={"ID":"d043fa3e-3790-470f-8b7d-743969cede03","Type":"ContainerDied","Data":"99561028cf3107e0e1773a0cbb9bd019fc00c210cec2558faae75c76f45e4914"}
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.694981 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwcwd" event={"ID":"e5321445-9e2b-44c7-9975-2bfe929ead53","Type":"ContainerStarted","Data":"5f301a72a2744fd87eabab57f55b3e0ef90d45db1e404cb3782a864c338f7db7"}
Feb 24 15:12:00 crc kubenswrapper[4982]: I0224 15:12:00.723189 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cwcwd" podStartSLOduration=2.166034825 podStartE2EDuration="10.723166887s" podCreationTimestamp="2026-02-24 15:11:50 +0000 UTC" firstStartedPulling="2026-02-24 15:11:51.140697226 +0000 UTC m=+1372.759755719" lastFinishedPulling="2026-02-24 15:11:59.697829288 +0000 UTC m=+1381.316887781" observedRunningTime="2026-02-24 15:12:00.718007336 +0000 UTC m=+1382.337065829" watchObservedRunningTime="2026-02-24 15:12:00.723166887 +0000 UTC m=+1382.342225380"
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.499612 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532432-rp655"]
Feb 24 15:12:01 crc kubenswrapper[4982]: W0224 15:12:01.523268 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e68d2ce_821e_4070_affc_79adbc1a34ca.slice/crio-0b71431a82550f16af446e9011c700253efc936af167705e52be73ba4993f33b WatchSource:0}: Error finding container 0b71431a82550f16af446e9011c700253efc936af167705e52be73ba4993f33b: Status 404 returned error can't find the container with id 0b71431a82550f16af446e9011c700253efc936af167705e52be73ba4993f33b
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.720109 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"86cf4035-92fa-4d4d-9ce7-ca961da212c2","Type":"ContainerStarted","Data":"89e4a1642b808d27cb1dff1059d52a74d5ed2b582b5c380dad2e5e4fa3dd7ca4"}
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.720166 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"86cf4035-92fa-4d4d-9ce7-ca961da212c2","Type":"ContainerStarted","Data":"8e29d73fcfd76c6c4208dd6db3f26b4d42631bfd008b14c65f201bcb5180ec63"}
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.721596 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.728426 4982 generic.go:334] "Generic (PLEG): container finished" podID="2358d960-bc18-4459-af89-9b2c0ae9081b" containerID="7ad36b7044bc73c2586c165e2610953a3c3880189b937af0933740f312ef562e" exitCode=0
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.728474 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f7xwm" event={"ID":"2358d960-bc18-4459-af89-9b2c0ae9081b","Type":"ContainerDied","Data":"7ad36b7044bc73c2586c165e2610953a3c3880189b937af0933740f312ef562e"}
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.730100 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532432-rp655" event={"ID":"6e68d2ce-821e-4070-affc-79adbc1a34ca","Type":"ContainerStarted","Data":"0b71431a82550f16af446e9011c700253efc936af167705e52be73ba4993f33b"}
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.731663 4982 generic.go:334] "Generic (PLEG): container finished" podID="ed0c055f-9963-4ca0-8a2b-dfda0d72f960" containerID="872a38f2f62e8e179addec5995688c6a9c7e58021a67c8125908076eb34d1d8d" exitCode=0
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.731682 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5" event={"ID":"ed0c055f-9963-4ca0-8a2b-dfda0d72f960","Type":"ContainerDied","Data":"872a38f2f62e8e179addec5995688c6a9c7e58021a67c8125908076eb34d1d8d"}
Feb 24 15:12:01 crc kubenswrapper[4982]: I0224 15:12:01.754543 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=6.2760374500000005 podStartE2EDuration="7.75452344s" podCreationTimestamp="2026-02-24 15:11:54 +0000 UTC" firstStartedPulling="2026-02-24 15:11:59.606876215 +0000 UTC m=+1381.225934708" lastFinishedPulling="2026-02-24 15:12:01.085362185 +0000 UTC m=+1382.704420698" observedRunningTime="2026-02-24 15:12:01.73836991 +0000 UTC m=+1383.357428413" watchObservedRunningTime="2026-02-24 15:12:01.75452344 +0000 UTC m=+1383.373581923"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.257763 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fv24"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.424994 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsrsh\" (UniqueName: \"kubernetes.io/projected/d043fa3e-3790-470f-8b7d-743969cede03-kube-api-access-tsrsh\") pod \"d043fa3e-3790-470f-8b7d-743969cede03\" (UID: \"d043fa3e-3790-470f-8b7d-743969cede03\") "
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.425259 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d043fa3e-3790-470f-8b7d-743969cede03-operator-scripts\") pod \"d043fa3e-3790-470f-8b7d-743969cede03\" (UID: \"d043fa3e-3790-470f-8b7d-743969cede03\") "
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.426736 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d043fa3e-3790-470f-8b7d-743969cede03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d043fa3e-3790-470f-8b7d-743969cede03" (UID: "d043fa3e-3790-470f-8b7d-743969cede03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.450851 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d043fa3e-3790-470f-8b7d-743969cede03-kube-api-access-tsrsh" (OuterVolumeSpecName: "kube-api-access-tsrsh") pod "d043fa3e-3790-470f-8b7d-743969cede03" (UID: "d043fa3e-3790-470f-8b7d-743969cede03"). InnerVolumeSpecName "kube-api-access-tsrsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.528762 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d043fa3e-3790-470f-8b7d-743969cede03-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.528804 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsrsh\" (UniqueName: \"kubernetes.io/projected/d043fa3e-3790-470f-8b7d-743969cede03-kube-api-access-tsrsh\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.703651 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qzglz"]
Feb 24 15:12:02 crc kubenswrapper[4982]: E0224 15:12:02.704322 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d043fa3e-3790-470f-8b7d-743969cede03" containerName="mariadb-database-create"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.704487 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d043fa3e-3790-470f-8b7d-743969cede03" containerName="mariadb-database-create"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.704872 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d043fa3e-3790-470f-8b7d-743969cede03" containerName="mariadb-database-create"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.705876 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qzglz"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.708003 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.712737 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qzglz"]
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.728510 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pbbft"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.741036 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-04b1-account-create-update-pknzw"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.781251 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-04b1-account-create-update-pknzw" event={"ID":"d66ea93a-f4f2-47c2-a9da-27a82afa75bd","Type":"ContainerDied","Data":"f07683a2f4249e6cb1f17aaad71e8ea2f5d503b721624f287db2a438dac16df6"}
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.781289 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f07683a2f4249e6cb1f17aaad71e8ea2f5d503b721624f287db2a438dac16df6"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.781379 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-04b1-account-create-update-pknzw"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.782394 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b0b6-account-create-update-zvgjp"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.783237 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b0b6-account-create-update-zvgjp" event={"ID":"99273cd9-0c56-4024-9df0-e8a8294bb346","Type":"ContainerDied","Data":"211284470cc0bad9b9ede3b56758336d07bc47af49a6ae21d9ea44313dd35c39"}
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.783262 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="211284470cc0bad9b9ede3b56758336d07bc47af49a6ae21d9ea44313dd35c39"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.784640 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pbbft" event={"ID":"fd310a49-0087-4874-82ba-63f6d002bf18","Type":"ContainerDied","Data":"f25c9a132d6677334144502ebd1a96a2b31152037904edbc8e94165c1ca2dc22"}
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.784664 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f25c9a132d6677334144502ebd1a96a2b31152037904edbc8e94165c1ca2dc22"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.784702 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pbbft"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.789750 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8fv24"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.789972 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8fv24" event={"ID":"d043fa3e-3790-470f-8b7d-743969cede03","Type":"ContainerDied","Data":"aa14d7a1f063ca02dc0581aaa5d8dfdd96a0011d775140f8a6aee9bc12abf3c3"}
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.790000 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa14d7a1f063ca02dc0581aaa5d8dfdd96a0011d775140f8a6aee9bc12abf3c3"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.835057 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-operator-scripts\") pod \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\" (UID: \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\") "
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.835147 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd310a49-0087-4874-82ba-63f6d002bf18-operator-scripts\") pod \"fd310a49-0087-4874-82ba-63f6d002bf18\" (UID: \"fd310a49-0087-4874-82ba-63f6d002bf18\") "
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.835282 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7rwp\" (UniqueName: \"kubernetes.io/projected/fd310a49-0087-4874-82ba-63f6d002bf18-kube-api-access-x7rwp\") pod \"fd310a49-0087-4874-82ba-63f6d002bf18\" (UID: \"fd310a49-0087-4874-82ba-63f6d002bf18\") "
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.835398 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr9k6\" (UniqueName: \"kubernetes.io/projected/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-kube-api-access-qr9k6\") pod \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\" (UID: \"d66ea93a-f4f2-47c2-a9da-27a82afa75bd\") "
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.836195 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk6br\" (UniqueName: \"kubernetes.io/projected/8725b3ed-7ded-4914-9d51-4f91d7901929-kube-api-access-mk6br\") pod \"root-account-create-update-qzglz\" (UID: \"8725b3ed-7ded-4914-9d51-4f91d7901929\") " pod="openstack/root-account-create-update-qzglz"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.836259 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8725b3ed-7ded-4914-9d51-4f91d7901929-operator-scripts\") pod \"root-account-create-update-qzglz\" (UID: \"8725b3ed-7ded-4914-9d51-4f91d7901929\") " pod="openstack/root-account-create-update-qzglz"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.836865 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd310a49-0087-4874-82ba-63f6d002bf18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd310a49-0087-4874-82ba-63f6d002bf18" (UID: "fd310a49-0087-4874-82ba-63f6d002bf18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.837260 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d66ea93a-f4f2-47c2-a9da-27a82afa75bd" (UID: "d66ea93a-f4f2-47c2-a9da-27a82afa75bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.840827 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd310a49-0087-4874-82ba-63f6d002bf18-kube-api-access-x7rwp" (OuterVolumeSpecName: "kube-api-access-x7rwp") pod "fd310a49-0087-4874-82ba-63f6d002bf18" (UID: "fd310a49-0087-4874-82ba-63f6d002bf18"). InnerVolumeSpecName "kube-api-access-x7rwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.848303 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-kube-api-access-qr9k6" (OuterVolumeSpecName: "kube-api-access-qr9k6") pod "d66ea93a-f4f2-47c2-a9da-27a82afa75bd" (UID: "d66ea93a-f4f2-47c2-a9da-27a82afa75bd"). InnerVolumeSpecName "kube-api-access-qr9k6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.937448 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99273cd9-0c56-4024-9df0-e8a8294bb346-operator-scripts\") pod \"99273cd9-0c56-4024-9df0-e8a8294bb346\" (UID: \"99273cd9-0c56-4024-9df0-e8a8294bb346\") "
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.937486 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5s26\" (UniqueName: \"kubernetes.io/projected/99273cd9-0c56-4024-9df0-e8a8294bb346-kube-api-access-p5s26\") pod \"99273cd9-0c56-4024-9df0-e8a8294bb346\" (UID: \"99273cd9-0c56-4024-9df0-e8a8294bb346\") "
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.938098 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk6br\" (UniqueName: \"kubernetes.io/projected/8725b3ed-7ded-4914-9d51-4f91d7901929-kube-api-access-mk6br\") pod \"root-account-create-update-qzglz\" (UID: \"8725b3ed-7ded-4914-9d51-4f91d7901929\") " pod="openstack/root-account-create-update-qzglz"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.938144 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8725b3ed-7ded-4914-9d51-4f91d7901929-operator-scripts\") pod \"root-account-create-update-qzglz\" (UID: \"8725b3ed-7ded-4914-9d51-4f91d7901929\") " pod="openstack/root-account-create-update-qzglz"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.938240 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.938251 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd310a49-0087-4874-82ba-63f6d002bf18-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.938260 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7rwp\" (UniqueName: \"kubernetes.io/projected/fd310a49-0087-4874-82ba-63f6d002bf18-kube-api-access-x7rwp\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.938323 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr9k6\" (UniqueName: \"kubernetes.io/projected/d66ea93a-f4f2-47c2-a9da-27a82afa75bd-kube-api-access-qr9k6\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.939206 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99273cd9-0c56-4024-9df0-e8a8294bb346-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99273cd9-0c56-4024-9df0-e8a8294bb346" (UID: "99273cd9-0c56-4024-9df0-e8a8294bb346"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.939979 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8725b3ed-7ded-4914-9d51-4f91d7901929-operator-scripts\") pod \"root-account-create-update-qzglz\" (UID: \"8725b3ed-7ded-4914-9d51-4f91d7901929\") " pod="openstack/root-account-create-update-qzglz"
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.943192 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99273cd9-0c56-4024-9df0-e8a8294bb346-kube-api-access-p5s26" (OuterVolumeSpecName: "kube-api-access-p5s26") pod "99273cd9-0c56-4024-9df0-e8a8294bb346" (UID: "99273cd9-0c56-4024-9df0-e8a8294bb346"). InnerVolumeSpecName "kube-api-access-p5s26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:12:02 crc kubenswrapper[4982]: I0224 15:12:02.963275 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk6br\" (UniqueName: \"kubernetes.io/projected/8725b3ed-7ded-4914-9d51-4f91d7901929-kube-api-access-mk6br\") pod \"root-account-create-update-qzglz\" (UID: \"8725b3ed-7ded-4914-9d51-4f91d7901929\") " pod="openstack/root-account-create-update-qzglz"
Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.044095 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99273cd9-0c56-4024-9df0-e8a8294bb346-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.044392 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5s26\" (UniqueName: \"kubernetes.io/projected/99273cd9-0c56-4024-9df0-e8a8294bb346-kube-api-access-p5s26\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.086740 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qzglz"
Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.532567 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f7xwm"
Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.643483 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"
Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.675348 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpk26\" (UniqueName: \"kubernetes.io/projected/2358d960-bc18-4459-af89-9b2c0ae9081b-kube-api-access-mpk26\") pod \"2358d960-bc18-4459-af89-9b2c0ae9081b\" (UID: \"2358d960-bc18-4459-af89-9b2c0ae9081b\") "
Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.675469 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2358d960-bc18-4459-af89-9b2c0ae9081b-operator-scripts\") pod \"2358d960-bc18-4459-af89-9b2c0ae9081b\" (UID: \"2358d960-bc18-4459-af89-9b2c0ae9081b\") "
Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.677140 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2358d960-bc18-4459-af89-9b2c0ae9081b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2358d960-bc18-4459-af89-9b2c0ae9081b" (UID: "2358d960-bc18-4459-af89-9b2c0ae9081b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.687165 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2358d960-bc18-4459-af89-9b2c0ae9081b-kube-api-access-mpk26" (OuterVolumeSpecName: "kube-api-access-mpk26") pod "2358d960-bc18-4459-af89-9b2c0ae9081b" (UID: "2358d960-bc18-4459-af89-9b2c0ae9081b"). InnerVolumeSpecName "kube-api-access-mpk26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.777422 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2qpg\" (UniqueName: \"kubernetes.io/projected/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-kube-api-access-v2qpg\") pod \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\" (UID: \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\") " Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.777463 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-operator-scripts\") pod \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\" (UID: \"ed0c055f-9963-4ca0-8a2b-dfda0d72f960\") " Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.778080 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpk26\" (UniqueName: \"kubernetes.io/projected/2358d960-bc18-4459-af89-9b2c0ae9081b-kube-api-access-mpk26\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.778101 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2358d960-bc18-4459-af89-9b2c0ae9081b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.781701 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed0c055f-9963-4ca0-8a2b-dfda0d72f960" (UID: "ed0c055f-9963-4ca0-8a2b-dfda0d72f960"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.784930 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-kube-api-access-v2qpg" (OuterVolumeSpecName: "kube-api-access-v2qpg") pod "ed0c055f-9963-4ca0-8a2b-dfda0d72f960" (UID: "ed0c055f-9963-4ca0-8a2b-dfda0d72f960"). InnerVolumeSpecName "kube-api-access-v2qpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.811464 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5" event={"ID":"ed0c055f-9963-4ca0-8a2b-dfda0d72f960","Type":"ContainerDied","Data":"113d027367c5c62e73d0b4cfeaeb3e9024aef9574a944d5f7580595c9133bfea"} Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.811521 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="113d027367c5c62e73d0b4cfeaeb3e9024aef9574a944d5f7580595c9133bfea" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.811575 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-3d1b-account-create-update-zdkh5" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.825452 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-f7xwm" event={"ID":"2358d960-bc18-4459-af89-9b2c0ae9081b","Type":"ContainerDied","Data":"8a044d6e339a826bec05d454fc92eb669900eac778f180fa3b81554cb316116a"} Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.825492 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a044d6e339a826bec05d454fc92eb669900eac778f180fa3b81554cb316116a" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.825579 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-f7xwm" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.849585 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532432-rp655" event={"ID":"6e68d2ce-821e-4070-affc-79adbc1a34ca","Type":"ContainerStarted","Data":"921ca0896c665205ebccaaf525e38ee0f6bfdfee37f158caa1b580766b1cf3f9"} Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.849718 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b0b6-account-create-update-zvgjp" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.851566 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qzglz"] Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.886848 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2qpg\" (UniqueName: \"kubernetes.io/projected/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-kube-api-access-v2qpg\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.886890 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c055f-9963-4ca0-8a2b-dfda0d72f960-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:03 crc kubenswrapper[4982]: I0224 15:12:03.895732 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532432-rp655" podStartSLOduration=2.801815527 podStartE2EDuration="3.89571283s" podCreationTimestamp="2026-02-24 15:12:00 +0000 UTC" firstStartedPulling="2026-02-24 15:12:01.526473879 +0000 UTC m=+1383.145532372" lastFinishedPulling="2026-02-24 15:12:02.620371182 +0000 UTC m=+1384.239429675" observedRunningTime="2026-02-24 15:12:03.870868954 +0000 UTC m=+1385.489927447" watchObservedRunningTime="2026-02-24 15:12:03.89571283 +0000 UTC m=+1385.514771323" Feb 24 15:12:04 crc kubenswrapper[4982]: I0224 15:12:04.874162 4982 generic.go:334] "Generic (PLEG): container finished" podID="8725b3ed-7ded-4914-9d51-4f91d7901929" containerID="05e8453a295a1bab670a904c2395cbc6e654284e7234f8e28c21601d5612d66f" exitCode=0 Feb 24 15:12:04 crc kubenswrapper[4982]: I0224 15:12:04.874378 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qzglz" event={"ID":"8725b3ed-7ded-4914-9d51-4f91d7901929","Type":"ContainerDied","Data":"05e8453a295a1bab670a904c2395cbc6e654284e7234f8e28c21601d5612d66f"} Feb 24 15:12:04 crc kubenswrapper[4982]: I0224 15:12:04.875588 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qzglz" 
event={"ID":"8725b3ed-7ded-4914-9d51-4f91d7901929","Type":"ContainerStarted","Data":"1bf667f33dab237172802bdaac5d35b8e938b02b4fd17c06ea7848280c47cd82"} Feb 24 15:12:04 crc kubenswrapper[4982]: I0224 15:12:04.882242 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e68d2ce-821e-4070-affc-79adbc1a34ca" containerID="921ca0896c665205ebccaaf525e38ee0f6bfdfee37f158caa1b580766b1cf3f9" exitCode=0 Feb 24 15:12:04 crc kubenswrapper[4982]: I0224 15:12:04.882377 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532432-rp655" event={"ID":"6e68d2ce-821e-4070-affc-79adbc1a34ca","Type":"ContainerDied","Data":"921ca0896c665205ebccaaf525e38ee0f6bfdfee37f158caa1b580766b1cf3f9"} Feb 24 15:12:05 crc kubenswrapper[4982]: I0224 15:12:05.163364 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:12:05 crc kubenswrapper[4982]: I0224 15:12:05.321540 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-544ccdb57f-qqrjk" podUID="a12a438e-edd4-4552-bfc2-0989f944b710" containerName="console" containerID="cri-o://fcfadd48e7a3a6ae0ee1992816b4229db009aa90d29fc9eaf88332ee13d48464" gracePeriod=15 Feb 24 15:12:05 crc kubenswrapper[4982]: I0224 15:12:05.701666 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:12:05 crc kubenswrapper[4982]: I0224 15:12:05.784775 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-sn7zc"] Feb 24 15:12:05 crc kubenswrapper[4982]: I0224 15:12:05.835939 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0" Feb 24 15:12:05 crc kubenswrapper[4982]: E0224 15:12:05.836192 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 15:12:05 crc kubenswrapper[4982]: E0224 15:12:05.837026 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 15:12:05 crc kubenswrapper[4982]: E0224 15:12:05.837459 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift podName:b6daa16f-c9d9-465a-8d00-711f5ef84326 nodeName:}" failed. No retries permitted until 2026-02-24 15:12:21.837445486 +0000 UTC m=+1403.456503979 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift") pod "swift-storage-0" (UID: "b6daa16f-c9d9-465a-8d00-711f5ef84326") : configmap "swift-ring-files" not found Feb 24 15:12:05 crc kubenswrapper[4982]: I0224 15:12:05.911731 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-544ccdb57f-qqrjk_a12a438e-edd4-4552-bfc2-0989f944b710/console/0.log" Feb 24 15:12:05 crc kubenswrapper[4982]: I0224 15:12:05.911796 4982 generic.go:334] "Generic (PLEG): container finished" podID="a12a438e-edd4-4552-bfc2-0989f944b710" containerID="fcfadd48e7a3a6ae0ee1992816b4229db009aa90d29fc9eaf88332ee13d48464" exitCode=2 Feb 24 15:12:05 crc kubenswrapper[4982]: I0224 15:12:05.911913 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544ccdb57f-qqrjk" event={"ID":"a12a438e-edd4-4552-bfc2-0989f944b710","Type":"ContainerDied","Data":"fcfadd48e7a3a6ae0ee1992816b4229db009aa90d29fc9eaf88332ee13d48464"} Feb 24 15:12:05 crc kubenswrapper[4982]: I0224 15:12:05.912000 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" podUID="75ea04fc-ab4f-446c-a3f2-2e82f387068b" containerName="dnsmasq-dns" containerID="cri-o://870b4e921ea87a79cf0350f0272a1310633ce47c2f136c16eea306184367d370" gracePeriod=10 Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.105375 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xr68t"] Feb 24 15:12:06 crc kubenswrapper[4982]: E0224 15:12:06.105939 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd310a49-0087-4874-82ba-63f6d002bf18" containerName="mariadb-database-create" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.105966 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd310a49-0087-4874-82ba-63f6d002bf18" containerName="mariadb-database-create" Feb 24 15:12:06 crc kubenswrapper[4982]: E0224 15:12:06.106009 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66ea93a-f4f2-47c2-a9da-27a82afa75bd" containerName="mariadb-account-create-update" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.106020 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66ea93a-f4f2-47c2-a9da-27a82afa75bd" containerName="mariadb-account-create-update" Feb 24 15:12:06 crc kubenswrapper[4982]: E0224 15:12:06.106046 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0c055f-9963-4ca0-8a2b-dfda0d72f960" containerName="mariadb-account-create-update" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.106057 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0c055f-9963-4ca0-8a2b-dfda0d72f960" containerName="mariadb-account-create-update" Feb 24 15:12:06 crc kubenswrapper[4982]: E0224 15:12:06.106072 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99273cd9-0c56-4024-9df0-e8a8294bb346" containerName="mariadb-account-create-update" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.106080 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="99273cd9-0c56-4024-9df0-e8a8294bb346" containerName="mariadb-account-create-update" Feb 24 15:12:06 crc kubenswrapper[4982]: E0224 15:12:06.106098 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2358d960-bc18-4459-af89-9b2c0ae9081b" containerName="mariadb-database-create" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.106107 4982 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2358d960-bc18-4459-af89-9b2c0ae9081b" containerName="mariadb-database-create" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.106361 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0c055f-9963-4ca0-8a2b-dfda0d72f960" containerName="mariadb-account-create-update" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.106385 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd310a49-0087-4874-82ba-63f6d002bf18" containerName="mariadb-database-create" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.106402 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="99273cd9-0c56-4024-9df0-e8a8294bb346" containerName="mariadb-account-create-update" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.106430 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66ea93a-f4f2-47c2-a9da-27a82afa75bd" containerName="mariadb-account-create-update" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.106446 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2358d960-bc18-4459-af89-9b2c0ae9081b" containerName="mariadb-database-create" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.107375 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xr68t" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.112695 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xr68t"] Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.144937 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8w9s\" (UniqueName: \"kubernetes.io/projected/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-kube-api-access-f8w9s\") pod \"glance-db-create-xr68t\" (UID: \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\") " pod="openstack/glance-db-create-xr68t" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.145114 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-operator-scripts\") pod \"glance-db-create-xr68t\" (UID: \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\") " pod="openstack/glance-db-create-xr68t" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.250493 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-operator-scripts\") pod \"glance-db-create-xr68t\" (UID: \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\") " pod="openstack/glance-db-create-xr68t" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.250697 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8w9s\" (UniqueName: \"kubernetes.io/projected/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-kube-api-access-f8w9s\") pod \"glance-db-create-xr68t\" (UID: \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\") " pod="openstack/glance-db-create-xr68t" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.253343 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-operator-scripts\") pod \"glance-db-create-xr68t\" (UID: \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\") " pod="openstack/glance-db-create-xr68t" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.267038 4982 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/glance-f120-account-create-update-7rgzv"] Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.268338 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.273128 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.276367 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8w9s\" (UniqueName: \"kubernetes.io/projected/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-kube-api-access-f8w9s\") pod \"glance-db-create-xr68t\" (UID: \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\") " pod="openstack/glance-db-create-xr68t" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.281800 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f120-account-create-update-7rgzv"] Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.352877 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flnx5\" (UniqueName: \"kubernetes.io/projected/dec3ae4f-cea7-465c-b582-d075b477d5d0-kube-api-access-flnx5\") pod \"glance-f120-account-create-update-7rgzv\" (UID: \"dec3ae4f-cea7-465c-b582-d075b477d5d0\") " pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.353276 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec3ae4f-cea7-465c-b582-d075b477d5d0-operator-scripts\") pod \"glance-f120-account-create-update-7rgzv\" (UID: \"dec3ae4f-cea7-465c-b582-d075b477d5d0\") " pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.424846 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xr68t" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.455649 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flnx5\" (UniqueName: \"kubernetes.io/projected/dec3ae4f-cea7-465c-b582-d075b477d5d0-kube-api-access-flnx5\") pod \"glance-f120-account-create-update-7rgzv\" (UID: \"dec3ae4f-cea7-465c-b582-d075b477d5d0\") " pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.455724 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec3ae4f-cea7-465c-b582-d075b477d5d0-operator-scripts\") pod \"glance-f120-account-create-update-7rgzv\" (UID: \"dec3ae4f-cea7-465c-b582-d075b477d5d0\") " pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.456536 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec3ae4f-cea7-465c-b582-d075b477d5d0-operator-scripts\") pod \"glance-f120-account-create-update-7rgzv\" (UID: \"dec3ae4f-cea7-465c-b582-d075b477d5d0\") " pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.481689 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flnx5\" (UniqueName: \"kubernetes.io/projected/dec3ae4f-cea7-465c-b582-d075b477d5d0-kube-api-access-flnx5\") pod \"glance-f120-account-create-update-7rgzv\" (UID: \"dec3ae4f-cea7-465c-b582-d075b477d5d0\") " pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.585109 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.589531 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532432-rp655" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.592673 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qzglz" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.664792 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8725b3ed-7ded-4914-9d51-4f91d7901929-operator-scripts\") pod \"8725b3ed-7ded-4914-9d51-4f91d7901929\" (UID: \"8725b3ed-7ded-4914-9d51-4f91d7901929\") " Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.664959 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7vht\" (UniqueName: \"kubernetes.io/projected/6e68d2ce-821e-4070-affc-79adbc1a34ca-kube-api-access-l7vht\") pod \"6e68d2ce-821e-4070-affc-79adbc1a34ca\" (UID: \"6e68d2ce-821e-4070-affc-79adbc1a34ca\") " Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.665020 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk6br\" (UniqueName: \"kubernetes.io/projected/8725b3ed-7ded-4914-9d51-4f91d7901929-kube-api-access-mk6br\") pod \"8725b3ed-7ded-4914-9d51-4f91d7901929\" (UID: \"8725b3ed-7ded-4914-9d51-4f91d7901929\") " Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.665935 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8725b3ed-7ded-4914-9d51-4f91d7901929-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8725b3ed-7ded-4914-9d51-4f91d7901929" (UID: "8725b3ed-7ded-4914-9d51-4f91d7901929"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.666422 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8725b3ed-7ded-4914-9d51-4f91d7901929-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.672988 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8725b3ed-7ded-4914-9d51-4f91d7901929-kube-api-access-mk6br" (OuterVolumeSpecName: "kube-api-access-mk6br") pod "8725b3ed-7ded-4914-9d51-4f91d7901929" (UID: "8725b3ed-7ded-4914-9d51-4f91d7901929"). InnerVolumeSpecName "kube-api-access-mk6br". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.675277 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e68d2ce-821e-4070-affc-79adbc1a34ca-kube-api-access-l7vht" (OuterVolumeSpecName: "kube-api-access-l7vht") pod "6e68d2ce-821e-4070-affc-79adbc1a34ca" (UID: "6e68d2ce-821e-4070-affc-79adbc1a34ca"). InnerVolumeSpecName "kube-api-access-l7vht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.770068 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7vht\" (UniqueName: \"kubernetes.io/projected/6e68d2ce-821e-4070-affc-79adbc1a34ca-kube-api-access-l7vht\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.770561 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk6br\" (UniqueName: \"kubernetes.io/projected/8725b3ed-7ded-4914-9d51-4f91d7901929-kube-api-access-mk6br\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.927751 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532432-rp655" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.930746 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532432-rp655" event={"ID":"6e68d2ce-821e-4070-affc-79adbc1a34ca","Type":"ContainerDied","Data":"0b71431a82550f16af446e9011c700253efc936af167705e52be73ba4993f33b"} Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.930797 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b71431a82550f16af446e9011c700253efc936af167705e52be73ba4993f33b" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.932884 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qzglz" event={"ID":"8725b3ed-7ded-4914-9d51-4f91d7901929","Type":"ContainerDied","Data":"1bf667f33dab237172802bdaac5d35b8e938b02b4fd17c06ea7848280c47cd82"} Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.932911 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bf667f33dab237172802bdaac5d35b8e938b02b4fd17c06ea7848280c47cd82" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.932945 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qzglz" Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.935043 4982 generic.go:334] "Generic (PLEG): container finished" podID="75ea04fc-ab4f-446c-a3f2-2e82f387068b" containerID="870b4e921ea87a79cf0350f0272a1310633ce47c2f136c16eea306184367d370" exitCode=0 Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.935110 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" event={"ID":"75ea04fc-ab4f-446c-a3f2-2e82f387068b","Type":"ContainerDied","Data":"870b4e921ea87a79cf0350f0272a1310633ce47c2f136c16eea306184367d370"} Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.941394 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532426-9jmrp"] Feb 24 15:12:06 crc kubenswrapper[4982]: I0224 15:12:06.958794 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532426-9jmrp"] Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.171418 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c115a13d-4dd0-433b-bb33-5e84dea1d390" path="/var/lib/kubelet/pods/c115a13d-4dd0-433b-bb33-5e84dea1d390/volumes" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.209067 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xr68t"] Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.304163 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-544ccdb57f-qqrjk_a12a438e-edd4-4552-bfc2-0989f944b710/console/0.log" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.304292 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.310868 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.321274 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f120-account-create-update-7rgzv"] Feb 24 15:12:07 crc kubenswrapper[4982]: W0224 15:12:07.337757 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddec3ae4f_cea7_465c_b582_d075b477d5d0.slice/crio-df1cfa78d452f50c8bd3ad726d3937d65ea03d53788a1e8891dcb283ae2a3ad8 WatchSource:0}: Error finding container df1cfa78d452f50c8bd3ad726d3937d65ea03d53788a1e8891dcb283ae2a3ad8: Status 404 returned error can't find the container with id df1cfa78d452f50c8bd3ad726d3937d65ea03d53788a1e8891dcb283ae2a3ad8 Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.396619 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-service-ca\") pod \"a12a438e-edd4-4552-bfc2-0989f944b710\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.396674 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-console-config\") pod \"a12a438e-edd4-4552-bfc2-0989f944b710\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.396716 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-serving-cert\") pod \"a12a438e-edd4-4552-bfc2-0989f944b710\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.396762 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-oauth-serving-cert\") pod \"a12a438e-edd4-4552-bfc2-0989f944b710\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.397636 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a12a438e-edd4-4552-bfc2-0989f944b710" (UID: "a12a438e-edd4-4552-bfc2-0989f944b710"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.397661 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-service-ca" (OuterVolumeSpecName: "service-ca") pod "a12a438e-edd4-4552-bfc2-0989f944b710" (UID: "a12a438e-edd4-4552-bfc2-0989f944b710"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.396911 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-ovsdbserver-sb\") pod \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.398037 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mcdx\" (UniqueName: \"kubernetes.io/projected/a12a438e-edd4-4552-bfc2-0989f944b710-kube-api-access-8mcdx\") pod \"a12a438e-edd4-4552-bfc2-0989f944b710\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.398145 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-dns-svc\") pod \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.398209 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-console-config" (OuterVolumeSpecName: "console-config") pod "a12a438e-edd4-4552-bfc2-0989f944b710" (UID: "a12a438e-edd4-4552-bfc2-0989f944b710"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.398298 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-config\") pod \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.398339 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-oauth-config\") pod \"a12a438e-edd4-4552-bfc2-0989f944b710\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.398386 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf48g\" (UniqueName: \"kubernetes.io/projected/75ea04fc-ab4f-446c-a3f2-2e82f387068b-kube-api-access-pf48g\") pod \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\" (UID: \"75ea04fc-ab4f-446c-a3f2-2e82f387068b\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.398446 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-trusted-ca-bundle\") pod \"a12a438e-edd4-4552-bfc2-0989f944b710\" (UID: \"a12a438e-edd4-4552-bfc2-0989f944b710\") " Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.399251 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.399269 4982 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc 
kubenswrapper[4982]: I0224 15:12:07.399283 4982 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.400227 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a12a438e-edd4-4552-bfc2-0989f944b710" (UID: "a12a438e-edd4-4552-bfc2-0989f944b710"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.402344 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a12a438e-edd4-4552-bfc2-0989f944b710" (UID: "a12a438e-edd4-4552-bfc2-0989f944b710"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.411663 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a12a438e-edd4-4552-bfc2-0989f944b710" (UID: "a12a438e-edd4-4552-bfc2-0989f944b710"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.418777 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ea04fc-ab4f-446c-a3f2-2e82f387068b-kube-api-access-pf48g" (OuterVolumeSpecName: "kube-api-access-pf48g") pod "75ea04fc-ab4f-446c-a3f2-2e82f387068b" (UID: "75ea04fc-ab4f-446c-a3f2-2e82f387068b"). InnerVolumeSpecName "kube-api-access-pf48g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.418817 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12a438e-edd4-4552-bfc2-0989f944b710-kube-api-access-8mcdx" (OuterVolumeSpecName: "kube-api-access-8mcdx") pod "a12a438e-edd4-4552-bfc2-0989f944b710" (UID: "a12a438e-edd4-4552-bfc2-0989f944b710"). InnerVolumeSpecName "kube-api-access-8mcdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.502080 4982 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.502117 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mcdx\" (UniqueName: \"kubernetes.io/projected/a12a438e-edd4-4552-bfc2-0989f944b710-kube-api-access-8mcdx\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.502136 4982 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a12a438e-edd4-4552-bfc2-0989f944b710-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.502146 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf48g\" (UniqueName: \"kubernetes.io/projected/75ea04fc-ab4f-446c-a3f2-2e82f387068b-kube-api-access-pf48g\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.502156 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a12a438e-edd4-4552-bfc2-0989f944b710-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.529407 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75ea04fc-ab4f-446c-a3f2-2e82f387068b" (UID: "75ea04fc-ab4f-446c-a3f2-2e82f387068b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.543325 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-config" (OuterVolumeSpecName: "config") pod "75ea04fc-ab4f-446c-a3f2-2e82f387068b" (UID: "75ea04fc-ab4f-446c-a3f2-2e82f387068b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.544148 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75ea04fc-ab4f-446c-a3f2-2e82f387068b" (UID: "75ea04fc-ab4f-446c-a3f2-2e82f387068b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.603852 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.603889 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.603897 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ea04fc-ab4f-446c-a3f2-2e82f387068b-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.959402 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xr68t" event={"ID":"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5","Type":"ContainerStarted","Data":"7fd97204f45901caa85fdf30221e98d8d0d49a7113d2558839e59f70ab081898"} Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.959453 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xr68t" event={"ID":"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5","Type":"ContainerStarted","Data":"6ccc996eff47479fc24d3f3d487f0bfb466564f85612232351aacf5ee797f4f2"} Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.963892 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-544ccdb57f-qqrjk_a12a438e-edd4-4552-bfc2-0989f944b710/console/0.log" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.964074 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-544ccdb57f-qqrjk" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.967570 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544ccdb57f-qqrjk" event={"ID":"a12a438e-edd4-4552-bfc2-0989f944b710","Type":"ContainerDied","Data":"1af8e6949a73afc476d7ae11d9b9c636f460de4b153f0c9de12e68ad764338ac"} Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.967647 4982 scope.go:117] "RemoveContainer" containerID="fcfadd48e7a3a6ae0ee1992816b4229db009aa90d29fc9eaf88332ee13d48464" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.989489 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-xr68t" podStartSLOduration=1.98946141 podStartE2EDuration="1.98946141s" podCreationTimestamp="2026-02-24 15:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:12:07.979357075 +0000 UTC m=+1389.598415568" watchObservedRunningTime="2026-02-24 15:12:07.98946141 +0000 UTC m=+1389.608519903" Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.993326 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" event={"ID":"75ea04fc-ab4f-446c-a3f2-2e82f387068b","Type":"ContainerDied","Data":"683704ac750c6a8abd01d8aa042c58a406fd1dca113f6ff6230dc4a06261ec32"} Feb 24 15:12:07 crc kubenswrapper[4982]: I0224 15:12:07.993407 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-sn7zc" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.003264 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f120-account-create-update-7rgzv" event={"ID":"dec3ae4f-cea7-465c-b582-d075b477d5d0","Type":"ContainerStarted","Data":"21c21d397aa948744692f6623a1e0ca215bd848d0946acd33a68455ae2741bf1"} Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.003594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f120-account-create-update-7rgzv" event={"ID":"dec3ae4f-cea7-465c-b582-d075b477d5d0","Type":"ContainerStarted","Data":"df1cfa78d452f50c8bd3ad726d3937d65ea03d53788a1e8891dcb283ae2a3ad8"} Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.038770 4982 scope.go:117] "RemoveContainer" containerID="870b4e921ea87a79cf0350f0272a1310633ce47c2f136c16eea306184367d370" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.041946 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-544ccdb57f-qqrjk"] Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.054134 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-544ccdb57f-qqrjk"] Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.067261 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-f120-account-create-update-7rgzv" podStartSLOduration=2.067235904 podStartE2EDuration="2.067235904s" podCreationTimestamp="2026-02-24 15:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:12:08.042286406 +0000 UTC m=+1389.661344899" watchObservedRunningTime="2026-02-24 15:12:08.067235904 +0000 UTC m=+1389.686294397" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.082058 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-sn7zc"] Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.088863 4982 scope.go:117] "RemoveContainer" containerID="b82c495e97d79754ad8216c617b6a9723ed1c120407cd3206a9d9e7722ae4e4d" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.092148 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-sn7zc"] Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.583531 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr"] Feb 24 15:12:08 crc kubenswrapper[4982]: E0224 15:12:08.583921 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8725b3ed-7ded-4914-9d51-4f91d7901929" containerName="mariadb-account-create-update" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.583939 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8725b3ed-7ded-4914-9d51-4f91d7901929" containerName="mariadb-account-create-update" Feb 24 15:12:08 crc kubenswrapper[4982]: E0224 15:12:08.583956 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ea04fc-ab4f-446c-a3f2-2e82f387068b" containerName="dnsmasq-dns" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.583964 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ea04fc-ab4f-446c-a3f2-2e82f387068b" containerName="dnsmasq-dns" Feb 24 15:12:08 crc kubenswrapper[4982]: E0224 15:12:08.583986 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e68d2ce-821e-4070-affc-79adbc1a34ca" containerName="oc" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 
15:12:08.583992 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e68d2ce-821e-4070-affc-79adbc1a34ca" containerName="oc" Feb 24 15:12:08 crc kubenswrapper[4982]: E0224 15:12:08.584003 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a438e-edd4-4552-bfc2-0989f944b710" containerName="console" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.584008 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a438e-edd4-4552-bfc2-0989f944b710" containerName="console" Feb 24 15:12:08 crc kubenswrapper[4982]: E0224 15:12:08.584024 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ea04fc-ab4f-446c-a3f2-2e82f387068b" containerName="init" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.584030 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ea04fc-ab4f-446c-a3f2-2e82f387068b" containerName="init" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.584200 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e68d2ce-821e-4070-affc-79adbc1a34ca" containerName="oc" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.584217 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8725b3ed-7ded-4914-9d51-4f91d7901929" containerName="mariadb-account-create-update" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.584227 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12a438e-edd4-4552-bfc2-0989f944b710" containerName="console" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.584241 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ea04fc-ab4f-446c-a3f2-2e82f387068b" containerName="dnsmasq-dns" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.584902 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.600639 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr"] Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.638380 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcfxp\" (UniqueName: \"kubernetes.io/projected/a91ac633-b13c-47df-baf7-eca2759210fa-kube-api-access-kcfxp\") pod \"mysqld-exporter-openstack-cell1-db-create-4nnfr\" (UID: \"a91ac633-b13c-47df-baf7-eca2759210fa\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.638728 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91ac633-b13c-47df-baf7-eca2759210fa-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-4nnfr\" (UID: \"a91ac633-b13c-47df-baf7-eca2759210fa\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.740839 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcfxp\" (UniqueName: \"kubernetes.io/projected/a91ac633-b13c-47df-baf7-eca2759210fa-kube-api-access-kcfxp\") pod \"mysqld-exporter-openstack-cell1-db-create-4nnfr\" (UID: \"a91ac633-b13c-47df-baf7-eca2759210fa\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.741006 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91ac633-b13c-47df-baf7-eca2759210fa-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-4nnfr\" (UID: \"a91ac633-b13c-47df-baf7-eca2759210fa\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.741888 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91ac633-b13c-47df-baf7-eca2759210fa-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-4nnfr\" (UID: \"a91ac633-b13c-47df-baf7-eca2759210fa\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.777614 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcfxp\" (UniqueName: \"kubernetes.io/projected/a91ac633-b13c-47df-baf7-eca2759210fa-kube-api-access-kcfxp\") pod \"mysqld-exporter-openstack-cell1-db-create-4nnfr\" (UID: \"a91ac633-b13c-47df-baf7-eca2759210fa\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.796714 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-46cd-account-create-update-q4m5c"] Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.798126 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.802516 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.808104 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-46cd-account-create-update-q4m5c"] Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.842580 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25f12dc6-7718-4641-9b56-85236953f8e2-operator-scripts\") pod \"mysqld-exporter-46cd-account-create-update-q4m5c\" (UID: \"25f12dc6-7718-4641-9b56-85236953f8e2\") " pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.843340 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/25f12dc6-7718-4641-9b56-85236953f8e2-kube-api-access-b86nz\") pod \"mysqld-exporter-46cd-account-create-update-q4m5c\" (UID: \"25f12dc6-7718-4641-9b56-85236953f8e2\") " pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.907558 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.945216 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25f12dc6-7718-4641-9b56-85236953f8e2-operator-scripts\") pod \"mysqld-exporter-46cd-account-create-update-q4m5c\" (UID: \"25f12dc6-7718-4641-9b56-85236953f8e2\") " pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.945423 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/25f12dc6-7718-4641-9b56-85236953f8e2-kube-api-access-b86nz\") pod \"mysqld-exporter-46cd-account-create-update-q4m5c\" (UID: \"25f12dc6-7718-4641-9b56-85236953f8e2\") " pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.946285 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25f12dc6-7718-4641-9b56-85236953f8e2-operator-scripts\") pod \"mysqld-exporter-46cd-account-create-update-q4m5c\" (UID: \"25f12dc6-7718-4641-9b56-85236953f8e2\") " pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:08 crc kubenswrapper[4982]: I0224 15:12:08.974091 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/25f12dc6-7718-4641-9b56-85236953f8e2-kube-api-access-b86nz\") pod \"mysqld-exporter-46cd-account-create-update-q4m5c\" (UID: \"25f12dc6-7718-4641-9b56-85236953f8e2\") " pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:09 crc kubenswrapper[4982]: I0224 15:12:09.017610 4982 generic.go:334] "Generic (PLEG): container finished" podID="dec3ae4f-cea7-465c-b582-d075b477d5d0" 
containerID="21c21d397aa948744692f6623a1e0ca215bd848d0946acd33a68455ae2741bf1" exitCode=0 Feb 24 15:12:09 crc kubenswrapper[4982]: I0224 15:12:09.017662 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f120-account-create-update-7rgzv" event={"ID":"dec3ae4f-cea7-465c-b582-d075b477d5d0","Type":"ContainerDied","Data":"21c21d397aa948744692f6623a1e0ca215bd848d0946acd33a68455ae2741bf1"} Feb 24 15:12:09 crc kubenswrapper[4982]: I0224 15:12:09.020372 4982 generic.go:334] "Generic (PLEG): container finished" podID="94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5" containerID="7fd97204f45901caa85fdf30221e98d8d0d49a7113d2558839e59f70ab081898" exitCode=0 Feb 24 15:12:09 crc kubenswrapper[4982]: I0224 15:12:09.020419 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xr68t" event={"ID":"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5","Type":"ContainerDied","Data":"7fd97204f45901caa85fdf30221e98d8d0d49a7113d2558839e59f70ab081898"} Feb 24 15:12:09 crc kubenswrapper[4982]: I0224 15:12:09.139138 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:09 crc kubenswrapper[4982]: I0224 15:12:09.160655 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ea04fc-ab4f-446c-a3f2-2e82f387068b" path="/var/lib/kubelet/pods/75ea04fc-ab4f-446c-a3f2-2e82f387068b/volumes" Feb 24 15:12:09 crc kubenswrapper[4982]: I0224 15:12:09.161450 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12a438e-edd4-4552-bfc2-0989f944b710" path="/var/lib/kubelet/pods/a12a438e-edd4-4552-bfc2-0989f944b710/volumes" Feb 24 15:12:09 crc kubenswrapper[4982]: I0224 15:12:09.319793 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qzglz"] Feb 24 15:12:09 crc kubenswrapper[4982]: I0224 15:12:09.331562 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qzglz"] Feb 24 15:12:10 crc kubenswrapper[4982]: I0224 15:12:10.036346 4982 generic.go:334] "Generic (PLEG): container finished" podID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerID="eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4" exitCode=0 Feb 24 15:12:10 crc kubenswrapper[4982]: I0224 15:12:10.036446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4","Type":"ContainerDied","Data":"eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4"} Feb 24 15:12:10 crc kubenswrapper[4982]: I0224 15:12:10.038538 4982 generic.go:334] "Generic (PLEG): container finished" podID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerID="ca9e4f5ce81b1272724e6bbc4e2d1009e86f074a2f97121639328c4ceafcb76f" exitCode=0 Feb 24 15:12:10 crc kubenswrapper[4982]: I0224 15:12:10.038627 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6","Type":"ContainerDied","Data":"ca9e4f5ce81b1272724e6bbc4e2d1009e86f074a2f97121639328c4ceafcb76f"} Feb 24 15:12:11 crc kubenswrapper[4982]: I0224 15:12:11.059609 4982 generic.go:334] "Generic (PLEG): container finished" podID="513f6549-901c-4faf-9011-af95fe7398ae" containerID="d6fc08d870d7fad19cfccc0848ad3a2332a787597366253b2f0903bce152b284" exitCode=0 Feb 24 15:12:11 crc kubenswrapper[4982]: I0224 15:12:11.059728 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"513f6549-901c-4faf-9011-af95fe7398ae","Type":"ContainerDied","Data":"d6fc08d870d7fad19cfccc0848ad3a2332a787597366253b2f0903bce152b284"} Feb 24 15:12:11 crc kubenswrapper[4982]: I0224 15:12:11.065705 4982 generic.go:334] "Generic (PLEG): container finished" podID="e5321445-9e2b-44c7-9975-2bfe929ead53" containerID="5f301a72a2744fd87eabab57f55b3e0ef90d45db1e404cb3782a864c338f7db7" exitCode=0 Feb 24 15:12:11 crc kubenswrapper[4982]: I0224 15:12:11.065753 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwcwd" event={"ID":"e5321445-9e2b-44c7-9975-2bfe929ead53","Type":"ContainerDied","Data":"5f301a72a2744fd87eabab57f55b3e0ef90d45db1e404cb3782a864c338f7db7"} Feb 24 15:12:11 crc kubenswrapper[4982]: I0224 15:12:11.157108 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8725b3ed-7ded-4914-9d51-4f91d7901929" path="/var/lib/kubelet/pods/8725b3ed-7ded-4914-9d51-4f91d7901929/volumes" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.243441 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xr68t" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.280721 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-operator-scripts\") pod \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\" (UID: \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.281025 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8w9s\" (UniqueName: \"kubernetes.io/projected/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-kube-api-access-f8w9s\") pod \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\" (UID: \"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.281489 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5" (UID: "94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.281902 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.293108 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-kube-api-access-f8w9s" (OuterVolumeSpecName: "kube-api-access-f8w9s") pod "94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5" (UID: "94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5"). InnerVolumeSpecName "kube-api-access-f8w9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.384573 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8w9s\" (UniqueName: \"kubernetes.io/projected/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5-kube-api-access-f8w9s\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.449301 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.485596 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec3ae4f-cea7-465c-b582-d075b477d5d0-operator-scripts\") pod \"dec3ae4f-cea7-465c-b582-d075b477d5d0\" (UID: \"dec3ae4f-cea7-465c-b582-d075b477d5d0\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.485746 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flnx5\" (UniqueName: \"kubernetes.io/projected/dec3ae4f-cea7-465c-b582-d075b477d5d0-kube-api-access-flnx5\") pod \"dec3ae4f-cea7-465c-b582-d075b477d5d0\" (UID: \"dec3ae4f-cea7-465c-b582-d075b477d5d0\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.486396 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec3ae4f-cea7-465c-b582-d075b477d5d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dec3ae4f-cea7-465c-b582-d075b477d5d0" (UID: "dec3ae4f-cea7-465c-b582-d075b477d5d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.494846 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec3ae4f-cea7-465c-b582-d075b477d5d0-kube-api-access-flnx5" (OuterVolumeSpecName: "kube-api-access-flnx5") pod "dec3ae4f-cea7-465c-b582-d075b477d5d0" (UID: "dec3ae4f-cea7-465c-b582-d075b477d5d0"). InnerVolumeSpecName "kube-api-access-flnx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.543328 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.587561 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4ldp\" (UniqueName: \"kubernetes.io/projected/e5321445-9e2b-44c7-9975-2bfe929ead53-kube-api-access-j4ldp\") pod \"e5321445-9e2b-44c7-9975-2bfe929ead53\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.587657 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-ring-data-devices\") pod \"e5321445-9e2b-44c7-9975-2bfe929ead53\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.587789 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-dispersionconf\") pod \"e5321445-9e2b-44c7-9975-2bfe929ead53\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.587876 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-scripts\") pod \"e5321445-9e2b-44c7-9975-2bfe929ead53\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.587919 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5321445-9e2b-44c7-9975-2bfe929ead53-etc-swift\") pod \"e5321445-9e2b-44c7-9975-2bfe929ead53\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.587969 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-combined-ca-bundle\") pod \"e5321445-9e2b-44c7-9975-2bfe929ead53\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.588011 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-swiftconf\") pod \"e5321445-9e2b-44c7-9975-2bfe929ead53\" (UID: \"e5321445-9e2b-44c7-9975-2bfe929ead53\") " Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.588776 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec3ae4f-cea7-465c-b582-d075b477d5d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.588820 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flnx5\" (UniqueName: \"kubernetes.io/projected/dec3ae4f-cea7-465c-b582-d075b477d5d0-kube-api-access-flnx5\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.593663 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e5321445-9e2b-44c7-9975-2bfe929ead53" (UID: "e5321445-9e2b-44c7-9975-2bfe929ead53"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.593804 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5321445-9e2b-44c7-9975-2bfe929ead53-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e5321445-9e2b-44c7-9975-2bfe929ead53" (UID: "e5321445-9e2b-44c7-9975-2bfe929ead53"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.599636 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5321445-9e2b-44c7-9975-2bfe929ead53-kube-api-access-j4ldp" (OuterVolumeSpecName: "kube-api-access-j4ldp") pod "e5321445-9e2b-44c7-9975-2bfe929ead53" (UID: "e5321445-9e2b-44c7-9975-2bfe929ead53"). InnerVolumeSpecName "kube-api-access-j4ldp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.625968 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e5321445-9e2b-44c7-9975-2bfe929ead53" (UID: "e5321445-9e2b-44c7-9975-2bfe929ead53"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.633599 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-scripts" (OuterVolumeSpecName: "scripts") pod "e5321445-9e2b-44c7-9975-2bfe929ead53" (UID: "e5321445-9e2b-44c7-9975-2bfe929ead53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.648373 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5321445-9e2b-44c7-9975-2bfe929ead53" (UID: "e5321445-9e2b-44c7-9975-2bfe929ead53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.672384 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e5321445-9e2b-44c7-9975-2bfe929ead53" (UID: "e5321445-9e2b-44c7-9975-2bfe929ead53"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.691147 4982 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.691460 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4ldp\" (UniqueName: \"kubernetes.io/projected/e5321445-9e2b-44c7-9975-2bfe929ead53-kube-api-access-j4ldp\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.691553 4982 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.691621 4982 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.691684 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5321445-9e2b-44c7-9975-2bfe929ead53-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.691746 4982 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e5321445-9e2b-44c7-9975-2bfe929ead53-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.691809 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5321445-9e2b-44c7-9975-2bfe929ead53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.801680 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-46cd-account-create-update-q4m5c"] Feb 24 15:12:12 crc kubenswrapper[4982]: W0224 15:12:12.812416 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25f12dc6_7718_4641_9b56_85236953f8e2.slice/crio-48b68a2d5ccfe7ec6f9d04841f8752954825e895d81765619b26bd0aa11914d0 WatchSource:0}: Error finding container 48b68a2d5ccfe7ec6f9d04841f8752954825e895d81765619b26bd0aa11914d0: Status 404 returned error can't find the container with id 48b68a2d5ccfe7ec6f9d04841f8752954825e895d81765619b26bd0aa11914d0 Feb 24 15:12:12 crc kubenswrapper[4982]: I0224 15:12:12.939217 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr"] Feb 24 15:12:12 crc kubenswrapper[4982]: W0224 15:12:12.949383 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda91ac633_b13c_47df_baf7_eca2759210fa.slice/crio-1003cc83a8d324c02fee420bcacddf3d39c7e9a356db88d863ab701c4971bcb0 WatchSource:0}: Error finding container 1003cc83a8d324c02fee420bcacddf3d39c7e9a356db88d863ab701c4971bcb0: Status 404 returned error can't find the container with id 1003cc83a8d324c02fee420bcacddf3d39c7e9a356db88d863ab701c4971bcb0 Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.107992 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f120-account-create-update-7rgzv" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.107984 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f120-account-create-update-7rgzv" event={"ID":"dec3ae4f-cea7-465c-b582-d075b477d5d0","Type":"ContainerDied","Data":"df1cfa78d452f50c8bd3ad726d3937d65ea03d53788a1e8891dcb283ae2a3ad8"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.108572 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df1cfa78d452f50c8bd3ad726d3937d65ea03d53788a1e8891dcb283ae2a3ad8" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.111097 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4","Type":"ContainerStarted","Data":"d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.111318 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.113306 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xr68t" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.113350 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xr68t" event={"ID":"94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5","Type":"ContainerDied","Data":"6ccc996eff47479fc24d3f3d487f0bfb466564f85612232351aacf5ee797f4f2"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.113379 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ccc996eff47479fc24d3f3d487f0bfb466564f85612232351aacf5ee797f4f2" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.116013 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"513f6549-901c-4faf-9011-af95fe7398ae","Type":"ContainerStarted","Data":"c2aff8a7f1dc2f2006eea95379acfeffb3dc9d305e76fa39550d666641993f24"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.116202 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.118556 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwcwd" event={"ID":"e5321445-9e2b-44c7-9975-2bfe929ead53","Type":"ContainerDied","Data":"2a780370c69889f9e433676fd55ea6c5a02ed9d385306a4ce4ce6d37d32bd641"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.118576 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a780370c69889f9e433676fd55ea6c5a02ed9d385306a4ce4ce6d37d32bd641" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.118749 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cwcwd" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.121891 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerStarted","Data":"5349a93da7cca0fcf40782299908c413ca3f436e19a045d64bb97189139516b5"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.124999 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6","Type":"ContainerStarted","Data":"60cbd02be2e19922f7155b97a44cb40ea052b45a59a5b3d9adb2b51e3945fad6"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.125941 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.130028 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" event={"ID":"25f12dc6-7718-4641-9b56-85236953f8e2","Type":"ContainerStarted","Data":"4d89f75122c102bbdbc2352cbb7d69a9d8a3c57945159d7da1012638cdb59262"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.130141 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" event={"ID":"25f12dc6-7718-4641-9b56-85236953f8e2","Type":"ContainerStarted","Data":"48b68a2d5ccfe7ec6f9d04841f8752954825e895d81765619b26bd0aa11914d0"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.144420 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" event={"ID":"a91ac633-b13c-47df-baf7-eca2759210fa","Type":"ContainerStarted","Data":"1003cc83a8d324c02fee420bcacddf3d39c7e9a356db88d863ab701c4971bcb0"} Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.178461 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=42.572361231 podStartE2EDuration="1m12.178442191s" podCreationTimestamp="2026-02-24 15:11:01 +0000 UTC" firstStartedPulling="2026-02-24 15:11:03.885571008 +0000 UTC m=+1325.504629501" lastFinishedPulling="2026-02-24 15:11:33.491651968 +0000 UTC m=+1355.110710461" observedRunningTime="2026-02-24 15:12:13.165712455 +0000 UTC m=+1394.784770978" watchObservedRunningTime="2026-02-24 15:12:13.178442191 +0000 UTC m=+1394.797500684" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.226574 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" podStartSLOduration=5.226555828 podStartE2EDuration="5.226555828s" podCreationTimestamp="2026-02-24 15:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:12:13.221610264 +0000 UTC m=+1394.840668757" watchObservedRunningTime="2026-02-24 15:12:13.226555828 +0000 UTC m=+1394.845614321" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.236288 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371964.618511 podStartE2EDuration="1m12.236264722s" podCreationTimestamp="2026-02-24 15:11:01 +0000 UTC" firstStartedPulling="2026-02-24 15:11:03.518767128 +0000 UTC m=+1325.137825621" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
15:12:13.203110791 +0000 UTC m=+1394.822169304" watchObservedRunningTime="2026-02-24 15:12:13.236264722 +0000 UTC m=+1394.855323215" Feb 24 15:12:13 crc kubenswrapper[4982]: I0224 15:12:13.256670 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371964.598122 podStartE2EDuration="1m12.256654487s" podCreationTimestamp="2026-02-24 15:11:01 +0000 UTC" firstStartedPulling="2026-02-24 15:11:04.006445587 +0000 UTC m=+1325.625504080" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:12:13.253697296 +0000 UTC m=+1394.872755789" watchObservedRunningTime="2026-02-24 15:12:13.256654487 +0000 UTC m=+1394.875712970" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.154415 4982 generic.go:334] "Generic (PLEG): container finished" podID="25f12dc6-7718-4641-9b56-85236953f8e2" containerID="4d89f75122c102bbdbc2352cbb7d69a9d8a3c57945159d7da1012638cdb59262" exitCode=0 Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.154524 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" event={"ID":"25f12dc6-7718-4641-9b56-85236953f8e2","Type":"ContainerDied","Data":"4d89f75122c102bbdbc2352cbb7d69a9d8a3c57945159d7da1012638cdb59262"} Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.157035 4982 generic.go:334] "Generic (PLEG): container finished" podID="a91ac633-b13c-47df-baf7-eca2759210fa" containerID="f72ede8de238ef96f0e0f57f8ef1e6c315e66f30450043c5a0e50d039ff5451b" exitCode=0 Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.157126 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" event={"ID":"a91ac633-b13c-47df-baf7-eca2759210fa","Type":"ContainerDied","Data":"f72ede8de238ef96f0e0f57f8ef1e6c315e66f30450043c5a0e50d039ff5451b"} Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.368034 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9r4vk"] Feb 24 15:12:14 crc kubenswrapper[4982]: E0224 15:12:14.368541 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec3ae4f-cea7-465c-b582-d075b477d5d0" containerName="mariadb-account-create-update" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.368560 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec3ae4f-cea7-465c-b582-d075b477d5d0" containerName="mariadb-account-create-update" Feb 24 15:12:14 crc kubenswrapper[4982]: E0224 15:12:14.368587 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5321445-9e2b-44c7-9975-2bfe929ead53" containerName="swift-ring-rebalance" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.368596 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5321445-9e2b-44c7-9975-2bfe929ead53" containerName="swift-ring-rebalance" Feb 24 15:12:14 crc kubenswrapper[4982]: E0224 15:12:14.368613 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5" containerName="mariadb-database-create" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.368620 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5" containerName="mariadb-database-create" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.368875 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5" containerName="mariadb-database-create" Feb 24 15:12:14 crc 
kubenswrapper[4982]: I0224 15:12:14.368903 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5321445-9e2b-44c7-9975-2bfe929ead53" containerName="swift-ring-rebalance" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.368917 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec3ae4f-cea7-465c-b582-d075b477d5d0" containerName="mariadb-account-create-update" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.369773 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9r4vk" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.371649 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.384147 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9r4vk"] Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.433367 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668ccd5d-5607-4cb2-832c-54aa8960ff2b-operator-scripts\") pod \"root-account-create-update-9r4vk\" (UID: \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\") " pod="openstack/root-account-create-update-9r4vk" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.433467 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2nbb\" (UniqueName: \"kubernetes.io/projected/668ccd5d-5607-4cb2-832c-54aa8960ff2b-kube-api-access-m2nbb\") pod \"root-account-create-update-9r4vk\" (UID: \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\") " pod="openstack/root-account-create-update-9r4vk" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.535929 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668ccd5d-5607-4cb2-832c-54aa8960ff2b-operator-scripts\") pod \"root-account-create-update-9r4vk\" (UID: \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\") " pod="openstack/root-account-create-update-9r4vk" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.536056 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2nbb\" (UniqueName: \"kubernetes.io/projected/668ccd5d-5607-4cb2-832c-54aa8960ff2b-kube-api-access-m2nbb\") pod \"root-account-create-update-9r4vk\" (UID: \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\") " pod="openstack/root-account-create-update-9r4vk" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.536809 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668ccd5d-5607-4cb2-832c-54aa8960ff2b-operator-scripts\") pod \"root-account-create-update-9r4vk\" (UID: \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\") " pod="openstack/root-account-create-update-9r4vk" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.554404 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2nbb\" (UniqueName: \"kubernetes.io/projected/668ccd5d-5607-4cb2-832c-54aa8960ff2b-kube-api-access-m2nbb\") pod \"root-account-create-update-9r4vk\" (UID: \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\") " pod="openstack/root-account-create-update-9r4vk" Feb 24 15:12:14 crc kubenswrapper[4982]: I0224 15:12:14.686647 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9r4vk" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.221110 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9r4vk"] Feb 24 15:12:15 crc kubenswrapper[4982]: W0224 15:12:15.237764 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod668ccd5d_5607_4cb2_832c_54aa8960ff2b.slice/crio-f285c3705f5a8687c6d51d3fd7723de3ab8f5ba849da5239b577eb9f4b2681b3 WatchSource:0}: Error finding container f285c3705f5a8687c6d51d3fd7723de3ab8f5ba849da5239b577eb9f4b2681b3: Status 404 returned error can't find the container with id f285c3705f5a8687c6d51d3fd7723de3ab8f5ba849da5239b577eb9f4b2681b3 Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.435546 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.801005 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.808195 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.863450 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/25f12dc6-7718-4641-9b56-85236953f8e2-kube-api-access-b86nz\") pod \"25f12dc6-7718-4641-9b56-85236953f8e2\" (UID: \"25f12dc6-7718-4641-9b56-85236953f8e2\") " Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.863526 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25f12dc6-7718-4641-9b56-85236953f8e2-operator-scripts\") pod \"25f12dc6-7718-4641-9b56-85236953f8e2\" (UID: \"25f12dc6-7718-4641-9b56-85236953f8e2\") " Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.863646 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcfxp\" (UniqueName: \"kubernetes.io/projected/a91ac633-b13c-47df-baf7-eca2759210fa-kube-api-access-kcfxp\") pod \"a91ac633-b13c-47df-baf7-eca2759210fa\" (UID: \"a91ac633-b13c-47df-baf7-eca2759210fa\") " Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.863753 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91ac633-b13c-47df-baf7-eca2759210fa-operator-scripts\") pod \"a91ac633-b13c-47df-baf7-eca2759210fa\" (UID: \"a91ac633-b13c-47df-baf7-eca2759210fa\") " Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.863942 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f12dc6-7718-4641-9b56-85236953f8e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25f12dc6-7718-4641-9b56-85236953f8e2" (UID: "25f12dc6-7718-4641-9b56-85236953f8e2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.864274 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91ac633-b13c-47df-baf7-eca2759210fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a91ac633-b13c-47df-baf7-eca2759210fa" (UID: "a91ac633-b13c-47df-baf7-eca2759210fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.864383 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a91ac633-b13c-47df-baf7-eca2759210fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.864408 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25f12dc6-7718-4641-9b56-85236953f8e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.869014 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f12dc6-7718-4641-9b56-85236953f8e2-kube-api-access-b86nz" (OuterVolumeSpecName: "kube-api-access-b86nz") pod "25f12dc6-7718-4641-9b56-85236953f8e2" (UID: "25f12dc6-7718-4641-9b56-85236953f8e2"). InnerVolumeSpecName "kube-api-access-b86nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.876392 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91ac633-b13c-47df-baf7-eca2759210fa-kube-api-access-kcfxp" (OuterVolumeSpecName: "kube-api-access-kcfxp") pod "a91ac633-b13c-47df-baf7-eca2759210fa" (UID: "a91ac633-b13c-47df-baf7-eca2759210fa"). InnerVolumeSpecName "kube-api-access-kcfxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.966088 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcfxp\" (UniqueName: \"kubernetes.io/projected/a91ac633-b13c-47df-baf7-eca2759210fa-kube-api-access-kcfxp\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:15 crc kubenswrapper[4982]: I0224 15:12:15.966520 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/25f12dc6-7718-4641-9b56-85236953f8e2-kube-api-access-b86nz\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.091929 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xcvbd" podUID="2c21939f-82d2-4553-acbd-b570e4d1527c" containerName="ovn-controller" probeResult="failure" output=< Feb 24 15:12:16 crc kubenswrapper[4982]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 24 15:12:16 crc kubenswrapper[4982]: > Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.136404 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gsdm4" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.145582 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gsdm4" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.187693 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9r4vk" event={"ID":"668ccd5d-5607-4cb2-832c-54aa8960ff2b","Type":"ContainerStarted","Data":"f285c3705f5a8687c6d51d3fd7723de3ab8f5ba849da5239b577eb9f4b2681b3"} Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.189274 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.189287 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-46cd-account-create-update-q4m5c" event={"ID":"25f12dc6-7718-4641-9b56-85236953f8e2","Type":"ContainerDied","Data":"48b68a2d5ccfe7ec6f9d04841f8752954825e895d81765619b26bd0aa11914d0"} Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.189327 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48b68a2d5ccfe7ec6f9d04841f8752954825e895d81765619b26bd0aa11914d0" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.190854 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.190849 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr" event={"ID":"a91ac633-b13c-47df-baf7-eca2759210fa","Type":"ContainerDied","Data":"1003cc83a8d324c02fee420bcacddf3d39c7e9a356db88d863ab701c4971bcb0"} Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.190904 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1003cc83a8d324c02fee420bcacddf3d39c7e9a356db88d863ab701c4971bcb0" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.402859 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xcvbd-config-97zww"] Feb 24 15:12:16 crc kubenswrapper[4982]: E0224 15:12:16.403599 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91ac633-b13c-47df-baf7-eca2759210fa" containerName="mariadb-database-create" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.403692 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91ac633-b13c-47df-baf7-eca2759210fa" containerName="mariadb-database-create" Feb 24 15:12:16 crc kubenswrapper[4982]: E0224 15:12:16.403759 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f12dc6-7718-4641-9b56-85236953f8e2" containerName="mariadb-account-create-update" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.403814 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f12dc6-7718-4641-9b56-85236953f8e2" containerName="mariadb-account-create-update" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.404059 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91ac633-b13c-47df-baf7-eca2759210fa" containerName="mariadb-database-create" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.404129 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f12dc6-7718-4641-9b56-85236953f8e2" containerName="mariadb-account-create-update" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.404988 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.408016 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.417616 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xcvbd-config-97zww"] Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.480770 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run-ovn\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.481956 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-scripts\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.482072 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.482192 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-log-ovn\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.482287 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-additional-scripts\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.482465 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfgk\" (UniqueName: \"kubernetes.io/projected/45908301-2616-42a0-b3cc-3e943152efe4-kube-api-access-nkfgk\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.584652 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkfgk\" (UniqueName: \"kubernetes.io/projected/45908301-2616-42a0-b3cc-3e943152efe4-kube-api-access-nkfgk\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.584756 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run-ovn\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.584778 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-scripts\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.584796 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.584842 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-log-ovn\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.584874 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-additional-scripts\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.585457 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run-ovn\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.585479 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-log-ovn\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.585633 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.585853 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-additional-scripts\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.587917 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-scripts\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.604899 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkfgk\" (UniqueName: \"kubernetes.io/projected/45908301-2616-42a0-b3cc-3e943152efe4-kube-api-access-nkfgk\") pod \"ovn-controller-xcvbd-config-97zww\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") " pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.770188 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xcvbd-config-97zww" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.876607 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-82562"] Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.878232 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-82562" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.889039 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7m87m" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.889250 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.904342 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-82562"] Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.994808 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-db-sync-config-data\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.995301 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-config-data\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.995364 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-combined-ca-bundle\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:16 crc kubenswrapper[4982]: I0224 15:12:16.995418 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbpzc\" (UniqueName: \"kubernetes.io/projected/1ed4027c-742c-4789-9fad-fc912c419d6d-kube-api-access-wbpzc\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.097210 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-db-sync-config-data\") pod \"glance-db-sync-82562\" (UID: 
\"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.097291 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-config-data\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.097338 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-combined-ca-bundle\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.097385 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbpzc\" (UniqueName: \"kubernetes.io/projected/1ed4027c-742c-4789-9fad-fc912c419d6d-kube-api-access-wbpzc\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.108629 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-config-data\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.109251 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-combined-ca-bundle\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.109912 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-db-sync-config-data\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.113195 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbpzc\" (UniqueName: \"kubernetes.io/projected/1ed4027c-742c-4789-9fad-fc912c419d6d-kube-api-access-wbpzc\") pod \"glance-db-sync-82562\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") " pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.202257 4982 generic.go:334] "Generic (PLEG): container finished" podID="511c8aa0-4327-455c-8caa-66bc442d199f" containerID="9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179" exitCode=0 Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.202342 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"511c8aa0-4327-455c-8caa-66bc442d199f","Type":"ContainerDied","Data":"9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179"} Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.204293 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9r4vk" 
event={"ID":"668ccd5d-5607-4cb2-832c-54aa8960ff2b","Type":"ContainerStarted","Data":"032fba7567c1016ef236ea5ea23e744188a41d03457e9153b5a72554c6770d7e"} Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.210440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerStarted","Data":"98f4c3bdec693be3c46bee686d4b453fc1e579e7724bbeee85b529a7e9c9188b"} Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.248831 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-82562" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.321772 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-9r4vk" podStartSLOduration=3.321748408 podStartE2EDuration="3.321748408s" podCreationTimestamp="2026-02-24 15:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:12:17.300074648 +0000 UTC m=+1398.919133141" watchObservedRunningTime="2026-02-24 15:12:17.321748408 +0000 UTC m=+1398.940806901" Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.416936 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xcvbd-config-97zww"] Feb 24 15:12:17 crc kubenswrapper[4982]: I0224 15:12:17.993217 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-82562"] Feb 24 15:12:18 crc kubenswrapper[4982]: I0224 15:12:18.222862 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-82562" event={"ID":"1ed4027c-742c-4789-9fad-fc912c419d6d","Type":"ContainerStarted","Data":"e533b035dadcb6b61b7ecc2e7c919b23e1592e099bf1858b5d105080c02678bb"} Feb 24 15:12:18 crc kubenswrapper[4982]: I0224 15:12:18.224873 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xcvbd-config-97zww" event={"ID":"45908301-2616-42a0-b3cc-3e943152efe4","Type":"ContainerStarted","Data":"31c0c7d0d28c8daf9dd2d3094371c0258a38e09f507b0d296ee16fcdf9fab36a"} Feb 24 15:12:18 crc kubenswrapper[4982]: I0224 15:12:18.224901 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xcvbd-config-97zww" event={"ID":"45908301-2616-42a0-b3cc-3e943152efe4","Type":"ContainerStarted","Data":"19dfb7edc226d171a12cf56810baba3a4a84fbca32edbdcd403fae25950ecf84"} Feb 24 15:12:18 crc kubenswrapper[4982]: I0224 15:12:18.237918 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"511c8aa0-4327-455c-8caa-66bc442d199f","Type":"ContainerStarted","Data":"f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53"} Feb 24 15:12:18 crc kubenswrapper[4982]: I0224 15:12:18.238322 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 24 15:12:18 crc kubenswrapper[4982]: I0224 15:12:18.240698 4982 generic.go:334] "Generic (PLEG): container finished" podID="668ccd5d-5607-4cb2-832c-54aa8960ff2b" containerID="032fba7567c1016ef236ea5ea23e744188a41d03457e9153b5a72554c6770d7e" exitCode=0 Feb 24 15:12:18 crc kubenswrapper[4982]: I0224 15:12:18.240761 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9r4vk" event={"ID":"668ccd5d-5607-4cb2-832c-54aa8960ff2b","Type":"ContainerDied","Data":"032fba7567c1016ef236ea5ea23e744188a41d03457e9153b5a72554c6770d7e"} Feb 24 15:12:18 crc 
kubenswrapper[4982]: I0224 15:12:18.250930 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xcvbd-config-97zww" podStartSLOduration=2.250906412 podStartE2EDuration="2.250906412s" podCreationTimestamp="2026-02-24 15:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:12:18.244016785 +0000 UTC m=+1399.863075288" watchObservedRunningTime="2026-02-24 15:12:18.250906412 +0000 UTC m=+1399.869964905"
Feb 24 15:12:18 crc kubenswrapper[4982]: I0224 15:12:18.303928 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371959.550869 podStartE2EDuration="1m17.303907363s" podCreationTimestamp="2026-02-24 15:11:01 +0000 UTC" firstStartedPulling="2026-02-24 15:11:03.520605431 +0000 UTC m=+1325.139663914" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:12:18.288304549 +0000 UTC m=+1399.907363042" watchObservedRunningTime="2026-02-24 15:12:18.303907363 +0000 UTC m=+1399.922965856"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.009921 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.012013 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.015536 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.030192 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.154250 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-config-data\") pod \"mysqld-exporter-0\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.154304 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wpzx\" (UniqueName: \"kubernetes.io/projected/32995058-5676-4cbd-9df5-92cd2ed06ff7-kube-api-access-8wpzx\") pod \"mysqld-exporter-0\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.154592 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.256328 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-config-data\") pod \"mysqld-exporter-0\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.256402 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wpzx\" (UniqueName: \"kubernetes.io/projected/32995058-5676-4cbd-9df5-92cd2ed06ff7-kube-api-access-8wpzx\") pod \"mysqld-exporter-0\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.256556 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.265743 4982 generic.go:334] "Generic (PLEG): container finished" podID="45908301-2616-42a0-b3cc-3e943152efe4" containerID="31c0c7d0d28c8daf9dd2d3094371c0258a38e09f507b0d296ee16fcdf9fab36a" exitCode=0
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.266152 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.266191 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xcvbd-config-97zww" event={"ID":"45908301-2616-42a0-b3cc-3e943152efe4","Type":"ContainerDied","Data":"31c0c7d0d28c8daf9dd2d3094371c0258a38e09f507b0d296ee16fcdf9fab36a"}
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.277258 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-config-data\") pod \"mysqld-exporter-0\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.296191 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wpzx\" (UniqueName: \"kubernetes.io/projected/32995058-5676-4cbd-9df5-92cd2ed06ff7-kube-api-access-8wpzx\") pod \"mysqld-exporter-0\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.351939 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.781384 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9r4vk"
Feb 24 15:12:19 crc kubenswrapper[4982]: W0224 15:12:19.923992 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32995058_5676_4cbd_9df5_92cd2ed06ff7.slice/crio-d85b99689138a456df72650d99ec67f0aa3bb650c419ae3c46bda378863d6349 WatchSource:0}: Error finding container d85b99689138a456df72650d99ec67f0aa3bb650c419ae3c46bda378863d6349: Status 404 returned error can't find the container with id d85b99689138a456df72650d99ec67f0aa3bb650c419ae3c46bda378863d6349
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.925483 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.972440 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668ccd5d-5607-4cb2-832c-54aa8960ff2b-operator-scripts\") pod \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\" (UID: \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\") "
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.972684 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2nbb\" (UniqueName: \"kubernetes.io/projected/668ccd5d-5607-4cb2-832c-54aa8960ff2b-kube-api-access-m2nbb\") pod \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\" (UID: \"668ccd5d-5607-4cb2-832c-54aa8960ff2b\") "
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.973945 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668ccd5d-5607-4cb2-832c-54aa8960ff2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "668ccd5d-5607-4cb2-832c-54aa8960ff2b" (UID: "668ccd5d-5607-4cb2-832c-54aa8960ff2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:19 crc kubenswrapper[4982]: I0224 15:12:19.977230 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668ccd5d-5607-4cb2-832c-54aa8960ff2b-kube-api-access-m2nbb" (OuterVolumeSpecName: "kube-api-access-m2nbb") pod "668ccd5d-5607-4cb2-832c-54aa8960ff2b" (UID: "668ccd5d-5607-4cb2-832c-54aa8960ff2b"). InnerVolumeSpecName "kube-api-access-m2nbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:12:20 crc kubenswrapper[4982]: I0224 15:12:20.075009 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2nbb\" (UniqueName: \"kubernetes.io/projected/668ccd5d-5607-4cb2-832c-54aa8960ff2b-kube-api-access-m2nbb\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:20 crc kubenswrapper[4982]: I0224 15:12:20.075051 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668ccd5d-5607-4cb2-832c-54aa8960ff2b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:20 crc kubenswrapper[4982]: I0224 15:12:20.305274 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9r4vk"
Feb 24 15:12:20 crc kubenswrapper[4982]: I0224 15:12:20.305283 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9r4vk" event={"ID":"668ccd5d-5607-4cb2-832c-54aa8960ff2b","Type":"ContainerDied","Data":"f285c3705f5a8687c6d51d3fd7723de3ab8f5ba849da5239b577eb9f4b2681b3"}
Feb 24 15:12:20 crc kubenswrapper[4982]: I0224 15:12:20.305392 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f285c3705f5a8687c6d51d3fd7723de3ab8f5ba849da5239b577eb9f4b2681b3"
Feb 24 15:12:20 crc kubenswrapper[4982]: I0224 15:12:20.317712 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"32995058-5676-4cbd-9df5-92cd2ed06ff7","Type":"ContainerStarted","Data":"d85b99689138a456df72650d99ec67f0aa3bb650c419ae3c46bda378863d6349"}
Feb 24 15:12:21 crc kubenswrapper[4982]: I0224 15:12:21.107672 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xcvbd"
Feb 24 15:12:21 crc kubenswrapper[4982]: I0224 15:12:21.915598 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0"
Feb 24 15:12:21 crc kubenswrapper[4982]: I0224 15:12:21.927132 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b6daa16f-c9d9-465a-8d00-711f5ef84326-etc-swift\") pod \"swift-storage-0\" (UID: \"b6daa16f-c9d9-465a-8d00-711f5ef84326\") " pod="openstack/swift-storage-0"
Feb 24 15:12:21 crc kubenswrapper[4982]: I0224 15:12:21.993111 4982 scope.go:117] "RemoveContainer" containerID="c630c0f8ac0051fc683792e77bb9382a58b26ecc50cfd3f715dbd9fdf7dd379e"
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.019859 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.341135 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xcvbd-config-97zww" event={"ID":"45908301-2616-42a0-b3cc-3e943152efe4","Type":"ContainerDied","Data":"19dfb7edc226d171a12cf56810baba3a4a84fbca32edbdcd403fae25950ecf84"}
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.341394 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19dfb7edc226d171a12cf56810baba3a4a84fbca32edbdcd403fae25950ecf84"
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.341172 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xcvbd-config-97zww"
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.425567 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkfgk\" (UniqueName: \"kubernetes.io/projected/45908301-2616-42a0-b3cc-3e943152efe4-kube-api-access-nkfgk\") pod \"45908301-2616-42a0-b3cc-3e943152efe4\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") "
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.425660 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-scripts\") pod \"45908301-2616-42a0-b3cc-3e943152efe4\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") "
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.425701 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run-ovn\") pod \"45908301-2616-42a0-b3cc-3e943152efe4\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") "
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.425827 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-additional-scripts\") pod \"45908301-2616-42a0-b3cc-3e943152efe4\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") "
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.425937 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run\") pod \"45908301-2616-42a0-b3cc-3e943152efe4\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") "
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.425981 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-log-ovn\") pod \"45908301-2616-42a0-b3cc-3e943152efe4\" (UID: \"45908301-2616-42a0-b3cc-3e943152efe4\") "
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.426002 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "45908301-2616-42a0-b3cc-3e943152efe4" (UID: "45908301-2616-42a0-b3cc-3e943152efe4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.426394 4982 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.426449 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "45908301-2616-42a0-b3cc-3e943152efe4" (UID: "45908301-2616-42a0-b3cc-3e943152efe4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.426476 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run" (OuterVolumeSpecName: "var-run") pod "45908301-2616-42a0-b3cc-3e943152efe4" (UID: "45908301-2616-42a0-b3cc-3e943152efe4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.426563 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "45908301-2616-42a0-b3cc-3e943152efe4" (UID: "45908301-2616-42a0-b3cc-3e943152efe4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.426956 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-scripts" (OuterVolumeSpecName: "scripts") pod "45908301-2616-42a0-b3cc-3e943152efe4" (UID: "45908301-2616-42a0-b3cc-3e943152efe4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.433729 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45908301-2616-42a0-b3cc-3e943152efe4-kube-api-access-nkfgk" (OuterVolumeSpecName: "kube-api-access-nkfgk") pod "45908301-2616-42a0-b3cc-3e943152efe4" (UID: "45908301-2616-42a0-b3cc-3e943152efe4"). InnerVolumeSpecName "kube-api-access-nkfgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.528612 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-scripts\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.528648 4982 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45908301-2616-42a0-b3cc-3e943152efe4-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.528659 4982 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-run\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.528672 4982 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45908301-2616-42a0-b3cc-3e943152efe4-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.528684 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkfgk\" (UniqueName: \"kubernetes.io/projected/45908301-2616-42a0-b3cc-3e943152efe4-kube-api-access-nkfgk\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.625547 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused"
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.941340 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused"
Feb 24 15:12:22 crc kubenswrapper[4982]: I0224 15:12:22.951236 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused"
Feb 24 15:12:23 crc kubenswrapper[4982]: I0224 15:12:23.044047 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 24 15:12:23 crc kubenswrapper[4982]: W0224 15:12:23.237170 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6daa16f_c9d9_465a_8d00_711f5ef84326.slice/crio-960de2080ca3c9ebca3b26a25ccf0f9ac063896692ce574858f8767827507c12 WatchSource:0}: Error finding container 960de2080ca3c9ebca3b26a25ccf0f9ac063896692ce574858f8767827507c12: Status 404 returned error can't find the container with id 960de2080ca3c9ebca3b26a25ccf0f9ac063896692ce574858f8767827507c12
Feb 24 15:12:23 crc kubenswrapper[4982]: I0224 15:12:23.352757 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"960de2080ca3c9ebca3b26a25ccf0f9ac063896692ce574858f8767827507c12"}
Feb 24 15:12:23 crc kubenswrapper[4982]: I0224 15:12:23.355589 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xcvbd-config-97zww"
Feb 24 15:12:23 crc kubenswrapper[4982]: I0224 15:12:23.355718 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerStarted","Data":"46ab2bd3f98c2ccbb33cefb6abc6d9ae11dc7002a3d38d54d76554b87f994fc8"}
Feb 24 15:12:23 crc kubenswrapper[4982]: I0224 15:12:23.388986 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=28.594754305 podStartE2EDuration="1m15.388966258s" podCreationTimestamp="2026-02-24 15:11:08 +0000 UTC" firstStartedPulling="2026-02-24 15:11:35.658053196 +0000 UTC m=+1357.277111689" lastFinishedPulling="2026-02-24 15:12:22.452265159 +0000 UTC m=+1404.071323642" observedRunningTime="2026-02-24 15:12:23.382981915 +0000 UTC m=+1405.002040418" watchObservedRunningTime="2026-02-24 15:12:23.388966258 +0000 UTC m=+1405.008024751"
Feb 24 15:12:23 crc kubenswrapper[4982]: I0224 15:12:23.455731 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xcvbd-config-97zww"]
Feb 24 15:12:23 crc kubenswrapper[4982]: I0224 15:12:23.466757 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xcvbd-config-97zww"]
Feb 24 15:12:24 crc kubenswrapper[4982]: I0224 15:12:24.381542 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"32995058-5676-4cbd-9df5-92cd2ed06ff7","Type":"ContainerStarted","Data":"4bf2b5a99a3638b6b060e393c4d2136abced9a48ae31b556aa4d5fd10a401413"}
Feb 24 15:12:24 crc kubenswrapper[4982]: I0224 15:12:24.409201 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.6772231140000002 podStartE2EDuration="6.409179378s" podCreationTimestamp="2026-02-24 15:12:18 +0000 UTC" firstStartedPulling="2026-02-24 15:12:19.926736128 +0000 UTC m=+1401.545794621" lastFinishedPulling="2026-02-24 15:12:23.658692392 +0000 UTC m=+1405.277750885" observedRunningTime="2026-02-24 15:12:24.404000227 +0000 UTC m=+1406.023058720" watchObservedRunningTime="2026-02-24 15:12:24.409179378 +0000 UTC m=+1406.028237881"
Feb 24 15:12:24 crc kubenswrapper[4982]: I0224 15:12:24.972007 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:24 crc kubenswrapper[4982]: I0224 15:12:24.972408 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:24 crc kubenswrapper[4982]: I0224 15:12:24.976898 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:25 crc kubenswrapper[4982]: I0224 15:12:25.157744 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45908301-2616-42a0-b3cc-3e943152efe4" path="/var/lib/kubelet/pods/45908301-2616-42a0-b3cc-3e943152efe4/volumes"
Feb 24 15:12:25 crc kubenswrapper[4982]: I0224 15:12:25.392634 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"dda00869630c53382664dae1f8913b242500fd9094e87d37cfbffd06e147e5f6"}
Feb 24 15:12:25 crc kubenswrapper[4982]: I0224 15:12:25.394260 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:26 crc kubenswrapper[4982]: I0224 15:12:26.405599 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"6b1dd350747d5eb559ec047bfce3cf2d59cdae0e11e09554623c95ed0ca20d57"}
Feb 24 15:12:26 crc kubenswrapper[4982]: I0224 15:12:26.406223 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"f09b97a7ba125d1b89d35f64ed18750126a5a74ce44536872110ca2377e5f4b3"}
Feb 24 15:12:26 crc kubenswrapper[4982]: I0224 15:12:26.406241 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"eacc6253882c08401660785f4d12c87cad91a519e6de6e90d787f70ffd11ace1"}
Feb 24 15:12:28 crc kubenswrapper[4982]: I0224 15:12:28.113773 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 24 15:12:28 crc kubenswrapper[4982]: I0224 15:12:28.426825 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="config-reloader" containerID="cri-o://98f4c3bdec693be3c46bee686d4b453fc1e579e7724bbeee85b529a7e9c9188b" gracePeriod=600
Feb 24 15:12:28 crc kubenswrapper[4982]: I0224 15:12:28.426826 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="thanos-sidecar" containerID="cri-o://46ab2bd3f98c2ccbb33cefb6abc6d9ae11dc7002a3d38d54d76554b87f994fc8" gracePeriod=600
Feb 24 15:12:28 crc kubenswrapper[4982]: I0224 15:12:28.426747 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="prometheus" containerID="cri-o://5349a93da7cca0fcf40782299908c413ca3f436e19a045d64bb97189139516b5" gracePeriod=600
Feb 24 15:12:29 crc kubenswrapper[4982]: I0224 15:12:29.456365 4982 generic.go:334] "Generic (PLEG): container finished" podID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerID="46ab2bd3f98c2ccbb33cefb6abc6d9ae11dc7002a3d38d54d76554b87f994fc8" exitCode=0
Feb 24 15:12:29 crc kubenswrapper[4982]: I0224 15:12:29.456393 4982 generic.go:334] "Generic (PLEG): container finished" podID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerID="98f4c3bdec693be3c46bee686d4b453fc1e579e7724bbeee85b529a7e9c9188b" exitCode=0
Feb 24 15:12:29 crc kubenswrapper[4982]: I0224 15:12:29.456415 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerDied","Data":"46ab2bd3f98c2ccbb33cefb6abc6d9ae11dc7002a3d38d54d76554b87f994fc8"}
Feb 24 15:12:29 crc kubenswrapper[4982]: I0224 15:12:29.456441 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerDied","Data":"98f4c3bdec693be3c46bee686d4b453fc1e579e7724bbeee85b529a7e9c9188b"}
Feb 24 15:12:29 crc kubenswrapper[4982]: I0224 15:12:29.972821 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.142:9090/-/ready\": dial tcp 10.217.0.142:9090: connect: connection refused"
Feb 24 15:12:30 crc kubenswrapper[4982]: I0224 15:12:30.467532 4982 generic.go:334] "Generic (PLEG): container finished" podID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerID="5349a93da7cca0fcf40782299908c413ca3f436e19a045d64bb97189139516b5" exitCode=0
Feb 24 15:12:30 crc kubenswrapper[4982]: I0224 15:12:30.467578 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerDied","Data":"5349a93da7cca0fcf40782299908c413ca3f436e19a045d64bb97189139516b5"}
Feb 24 15:12:32 crc kubenswrapper[4982]: I0224 15:12:32.563643 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="511c8aa0-4327-455c-8caa-66bc442d199f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused"
Feb 24 15:12:32 crc kubenswrapper[4982]: I0224 15:12:32.624850 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused"
Feb 24 15:12:32 crc kubenswrapper[4982]: I0224 15:12:32.940357 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused"
Feb 24 15:12:32 crc kubenswrapper[4982]: I0224 15:12:32.944741 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 15:12:34 crc kubenswrapper[4982]: I0224 15:12:34.972687 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.142:9090/-/ready\": dial tcp 10.217.0.142:9090: connect: connection refused"
Feb 24 15:12:36 crc kubenswrapper[4982]: E0224 15:12:36.612184 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Feb 24 15:12:36 crc kubenswrapper[4982]: E0224 15:12:36.612605 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbpzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-82562_openstack(1ed4027c-742c-4789-9fad-fc912c419d6d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 15:12:36 crc kubenswrapper[4982]: E0224 15:12:36.613927 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-82562" podUID="1ed4027c-742c-4789-9fad-fc912c419d6d"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.020669 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.119955 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.120114 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-2\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.120138 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9rrc\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-kube-api-access-d9rrc\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.120184 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-0\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.120216 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-web-config\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.120248 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d25d513-4841-4cbf-9e48-7ce1494d6450-config-out\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.120283 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-thanos-prometheus-http-client-file\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.120344 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-tls-assets\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.120409 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-config\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.120451 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-1\") pod \"3d25d513-4841-4cbf-9e48-7ce1494d6450\" (UID: \"3d25d513-4841-4cbf-9e48-7ce1494d6450\") "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.121566 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.122680 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.123139 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.126301 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.128025 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-kube-api-access-d9rrc" (OuterVolumeSpecName: "kube-api-access-d9rrc") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "kube-api-access-d9rrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.129249 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d25d513-4841-4cbf-9e48-7ce1494d6450-config-out" (OuterVolumeSpecName: "config-out") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.137676 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-config" (OuterVolumeSpecName: "config") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.139618 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.153327 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.171734 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-web-config" (OuterVolumeSpecName: "web-config") pod "3d25d513-4841-4cbf-9e48-7ce1494d6450" (UID: "3d25d513-4841-4cbf-9e48-7ce1494d6450"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.223765 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") on node \"crc\" "
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.223798 4982 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.223810 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9rrc\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-kube-api-access-d9rrc\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.223819 4982 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.223956 4982 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-web-config\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.223965 4982 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d25d513-4841-4cbf-9e48-7ce1494d6450-config-out\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.223976 4982 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.224020 4982 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d25d513-4841-4cbf-9e48-7ce1494d6450-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.224028 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d25d513-4841-4cbf-9e48-7ce1494d6450-config\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.224036 4982 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d25d513-4841-4cbf-9e48-7ce1494d6450-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.260546 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.260746 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74") on node "crc"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.325604 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") on node \"crc\" DevicePath \"\""
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.549381 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d25d513-4841-4cbf-9e48-7ce1494d6450","Type":"ContainerDied","Data":"c0a23b33c481a8858f1121d311fddd2f2b8283da7878ab052d4377098cabb01f"}
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.549805 4982 scope.go:117] "RemoveContainer" containerID="46ab2bd3f98c2ccbb33cefb6abc6d9ae11dc7002a3d38d54d76554b87f994fc8"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.549437 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: E0224 15:12:37.550963 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-82562" podUID="1ed4027c-742c-4789-9fad-fc912c419d6d"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.578817 4982 scope.go:117] "RemoveContainer" containerID="98f4c3bdec693be3c46bee686d4b453fc1e579e7724bbeee85b529a7e9c9188b"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.740245 4982 scope.go:117] "RemoveContainer" containerID="5349a93da7cca0fcf40782299908c413ca3f436e19a045d64bb97189139516b5"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.766085 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.773264 4982 scope.go:117] "RemoveContainer" containerID="3cba6fe01c06b31027a2c8b1fb1adcd803560abb67f4eb8d66faf7d1c6d63cd8"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.780957 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.826824 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 24 15:12:37 crc kubenswrapper[4982]: E0224 15:12:37.827367 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45908301-2616-42a0-b3cc-3e943152efe4" containerName="ovn-config"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827390 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="45908301-2616-42a0-b3cc-3e943152efe4" containerName="ovn-config"
Feb 24 15:12:37 crc kubenswrapper[4982]: E0224 15:12:37.827402 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="config-reloader"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827411 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="config-reloader"
Feb 24 15:12:37 crc kubenswrapper[4982]: E0224 15:12:37.827421 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="init-config-reloader"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827430 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="init-config-reloader"
Feb 24 15:12:37 crc kubenswrapper[4982]: E0224 15:12:37.827453 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668ccd5d-5607-4cb2-832c-54aa8960ff2b" containerName="mariadb-account-create-update"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827460 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="668ccd5d-5607-4cb2-832c-54aa8960ff2b" containerName="mariadb-account-create-update"
Feb 24 15:12:37 crc kubenswrapper[4982]: E0224 15:12:37.827470 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="thanos-sidecar"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827477 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="thanos-sidecar"
Feb 24 15:12:37 crc kubenswrapper[4982]: E0224 15:12:37.827523 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="prometheus"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827533 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="prometheus"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827774 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="prometheus"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827808 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="668ccd5d-5607-4cb2-832c-54aa8960ff2b" containerName="mariadb-account-create-update"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827824 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="config-reloader"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827836 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="45908301-2616-42a0-b3cc-3e943152efe4" containerName="ovn-config"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.827849 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" containerName="thanos-sidecar"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.844583 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.856101 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.860337 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.860543 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.860711 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.860812 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.860912 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.861585 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.861730 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-sdt6c"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.861838 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.866132 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.937975 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d4f4cf82-cc11-498c-a168-ca862bfcd361-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.938280 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d4f4cf82-cc11-498c-a168-ca862bfcd361-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.938333 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.938359 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.938376 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29xs\" (UniqueName: \"kubernetes.io/projected/d4f4cf82-cc11-498c-a168-ca862bfcd361-kube-api-access-d29xs\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.938401 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d4f4cf82-cc11-498c-a168-ca862bfcd361-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.938556 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.938621 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.938701 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d4f4cf82-cc11-498c-a168-ca862bfcd361-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.938770 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d4f4cf82-cc11-498c-a168-ca862bfcd361-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.939006 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.939093 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:37 crc kubenswrapper[4982]: I0224 15:12:37.939130 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.040533 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.040629 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d4f4cf82-cc11-498c-a168-ca862bfcd361-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.040670 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d4f4cf82-cc11-498c-a168-ca862bfcd361-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.040733 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.040768 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.040797 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d29xs\" (UniqueName: \"kubernetes.io/projected/d4f4cf82-cc11-498c-a168-ca862bfcd361-kube-api-access-d29xs\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.040824 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d4f4cf82-cc11-498c-a168-ca862bfcd361-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.040852 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.040876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.041104 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d4f4cf82-cc11-498c-a168-ca862bfcd361-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.041137 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d4f4cf82-cc11-498c-a168-ca862bfcd361-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.041216 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.041252 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0"
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.041924 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d4f4cf82-cc11-498c-a168-ca862bfcd361-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.042958 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d4f4cf82-cc11-498c-a168-ca862bfcd361-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.043523 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d4f4cf82-cc11-498c-a168-ca862bfcd361-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.045017 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.046145 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d4f4cf82-cc11-498c-a168-ca862bfcd361-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.046635 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d4f4cf82-cc11-498c-a168-ca862bfcd361-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.046652 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.046751 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " 
pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.046991 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.047343 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.047377 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4f4cf82-cc11-498c-a168-ca862bfcd361-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.051884 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.051922 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/974781c848519588b21abb088da5a5ce03d5802a8189d06d25b9b3e41a1054bf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.064207 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d29xs\" (UniqueName: \"kubernetes.io/projected/d4f4cf82-cc11-498c-a168-ca862bfcd361-kube-api-access-d29xs\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.111586 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39eb75e4-eb06-428e-82e9-73b4d41efb74\") pod \"prometheus-metric-storage-0\" (UID: \"d4f4cf82-cc11-498c-a168-ca862bfcd361\") " pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.219585 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.564304 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"5cbb2a8bbb93f94f1e95d821f3dc296b9938e597ff95e6c38658c1b202bbbbf1"} Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.564712 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"c28c503d401597e8c062d1faa092d8a25b0c67c1cc241c127b9bc5f8a4d6e24f"} Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.564732 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"77cb282ccfd73b7b7173a330f5745551befd1c8400045277bdecd7e07c2a0cf2"} Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.564746 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"e0fcee7af282b9f1b50497219f8bd416645f33cdff0cfb0e6c4b8600544f5fb5"} Feb 24 15:12:38 crc kubenswrapper[4982]: I0224 15:12:38.676751 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 24 15:12:38 crc kubenswrapper[4982]: W0224 15:12:38.677368 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4f4cf82_cc11_498c_a168_ca862bfcd361.slice/crio-b85f97eac3a29d3a4003480aaf9cc7970b5536bc51ac832e0099b7d7aa279b79 WatchSource:0}: Error finding container b85f97eac3a29d3a4003480aaf9cc7970b5536bc51ac832e0099b7d7aa279b79: Status 404 returned error can't find the container with id b85f97eac3a29d3a4003480aaf9cc7970b5536bc51ac832e0099b7d7aa279b79 Feb 24 15:12:39 crc kubenswrapper[4982]: I0224 15:12:39.175024 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d25d513-4841-4cbf-9e48-7ce1494d6450" path="/var/lib/kubelet/pods/3d25d513-4841-4cbf-9e48-7ce1494d6450/volumes" Feb 24 15:12:39 crc kubenswrapper[4982]: I0224 15:12:39.577115 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"cdf620f4fee9a5dc3f1f5a7a2931df6aae90b39c1507c3d5e824120e56475a22"} Feb 24 15:12:39 crc kubenswrapper[4982]: I0224 15:12:39.579021 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4f4cf82-cc11-498c-a168-ca862bfcd361","Type":"ContainerStarted","Data":"b85f97eac3a29d3a4003480aaf9cc7970b5536bc51ac832e0099b7d7aa279b79"} Feb 24 15:12:40 crc kubenswrapper[4982]: I0224 15:12:40.603396 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"2bd9dba2654ab3c18c9bca762b052b9975bce066cd5305ad33ce41ab574607cd"} Feb 24 15:12:40 crc kubenswrapper[4982]: I0224 15:12:40.603867 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"a089fb09c4156211ffe52638a8171f5670beb3775cab0a3ce48185ec0c52c261"} Feb 24 15:12:40 crc kubenswrapper[4982]: I0224 15:12:40.603920 4982 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"68f1c2754876da0046508a4c11de1443aed531ce509582feb33a7975cba02c91"} Feb 24 15:12:40 crc kubenswrapper[4982]: I0224 15:12:40.603934 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"a2e7e21e434e5f46bb9826d479dad4ffe66af3af7cf10e881873fcf800b94414"} Feb 24 15:12:40 crc kubenswrapper[4982]: I0224 15:12:40.603946 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"c8813faeb552d5adf5c9514fb6d2fbcc50a015f5f497afd9f61283be7cc1430a"} Feb 24 15:12:41 crc kubenswrapper[4982]: I0224 15:12:41.616551 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b6daa16f-c9d9-465a-8d00-711f5ef84326","Type":"ContainerStarted","Data":"e97c635ee9e42f0a4fa412c4f0302b0acb617d59ab2a63e72ff6cbc2e4bebe22"} Feb 24 15:12:41 crc kubenswrapper[4982]: I0224 15:12:41.658973 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.658604639 podStartE2EDuration="53.658950605s" podCreationTimestamp="2026-02-24 15:11:48 +0000 UTC" firstStartedPulling="2026-02-24 15:12:23.249057173 +0000 UTC m=+1404.868115696" lastFinishedPulling="2026-02-24 15:12:39.249403169 +0000 UTC m=+1420.868461662" observedRunningTime="2026-02-24 15:12:41.654815643 +0000 UTC m=+1423.273874136" watchObservedRunningTime="2026-02-24 15:12:41.658950605 +0000 UTC m=+1423.278009098" Feb 24 15:12:41 crc kubenswrapper[4982]: I0224 15:12:41.942701 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hdhlf"] Feb 24 15:12:41 crc kubenswrapper[4982]: I0224 15:12:41.944672 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:41 crc kubenswrapper[4982]: I0224 15:12:41.947116 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 24 15:12:41 crc kubenswrapper[4982]: I0224 15:12:41.957637 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hdhlf"] Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.022805 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.023071 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.023227 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.023413 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.023438 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-config\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.023478 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jvhz\" (UniqueName: \"kubernetes.io/projected/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-kube-api-access-5jvhz\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.127086 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.127194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: 
\"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.127301 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.127328 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-config\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.127369 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jvhz\" (UniqueName: \"kubernetes.io/projected/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-kube-api-access-5jvhz\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.127451 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.128741 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.129384 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.130043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.130780 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.131408 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-config\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc 
kubenswrapper[4982]: I0224 15:12:42.153245 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jvhz\" (UniqueName: \"kubernetes.io/projected/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-kube-api-access-5jvhz\") pod \"dnsmasq-dns-5c79d794d7-hdhlf\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.265392 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.565734 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.625867 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.662035 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4f4cf82-cc11-498c-a168-ca862bfcd361","Type":"ContainerStarted","Data":"0b78dafa1790694bcbdfa6dbb47a2cf2451702a7f92379f2db2f87d5186ed688"} Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.775323 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hdhlf"] Feb 24 15:12:42 crc kubenswrapper[4982]: W0224 15:12:42.832868 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05fc4eea_f1dd_45dd_952c_dd01bff24a3b.slice/crio-89e58ff813e15709c17b2902fbab99d5467ee19c8dff7918ff997ccc25b4cba2 WatchSource:0}: Error finding container 89e58ff813e15709c17b2902fbab99d5467ee19c8dff7918ff997ccc25b4cba2: Status 404 returned error can't find the container with id 89e58ff813e15709c17b2902fbab99d5467ee19c8dff7918ff997ccc25b4cba2 Feb 24 15:12:42 crc kubenswrapper[4982]: I0224 15:12:42.942679 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.399490 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-h2snk"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.400757 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.409358 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-affe-account-create-update-22rz7"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.410872 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.412512 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.419850 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h2snk"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.430596 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-affe-account-create-update-22rz7"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.499713 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-c7jsl"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.501431 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.508915 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-c7jsl"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.568606 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d299bdd-7d22-4c41-8dbf-c09b655e2705-operator-scripts\") pod \"cinder-db-create-h2snk\" (UID: \"6d299bdd-7d22-4c41-8dbf-c09b655e2705\") " pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.568675 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75xht\" (UniqueName: \"kubernetes.io/projected/21152da4-0b6c-43d5-979f-6178105a507a-kube-api-access-75xht\") pod \"cinder-affe-account-create-update-22rz7\" (UID: \"21152da4-0b6c-43d5-979f-6178105a507a\") " pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.568767 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lql\" (UniqueName: \"kubernetes.io/projected/6d299bdd-7d22-4c41-8dbf-c09b655e2705-kube-api-access-r9lql\") pod \"cinder-db-create-h2snk\" (UID: \"6d299bdd-7d22-4c41-8dbf-c09b655e2705\") " pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.568797 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21152da4-0b6c-43d5-979f-6178105a507a-operator-scripts\") pod \"cinder-affe-account-create-update-22rz7\" (UID: \"21152da4-0b6c-43d5-979f-6178105a507a\") " pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.603819 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-cca8-account-create-update-q9fwj"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.605477 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.611960 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.614544 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-cca8-account-create-update-q9fwj"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.671116 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21152da4-0b6c-43d5-979f-6178105a507a-operator-scripts\") pod \"cinder-affe-account-create-update-22rz7\" (UID: \"21152da4-0b6c-43d5-979f-6178105a507a\") " pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.671178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-operator-scripts\") pod \"heat-db-create-c7jsl\" (UID: \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\") " pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.671302 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d299bdd-7d22-4c41-8dbf-c09b655e2705-operator-scripts\") pod \"cinder-db-create-h2snk\" (UID: \"6d299bdd-7d22-4c41-8dbf-c09b655e2705\") " pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.671339 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75xht\" (UniqueName: \"kubernetes.io/projected/21152da4-0b6c-43d5-979f-6178105a507a-kube-api-access-75xht\") pod \"cinder-affe-account-create-update-22rz7\" (UID: \"21152da4-0b6c-43d5-979f-6178105a507a\") " pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.671378 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789pv\" (UniqueName: \"kubernetes.io/projected/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-kube-api-access-789pv\") pod \"heat-db-create-c7jsl\" (UID: \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\") " pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.671424 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9lql\" (UniqueName: \"kubernetes.io/projected/6d299bdd-7d22-4c41-8dbf-c09b655e2705-kube-api-access-r9lql\") pod \"cinder-db-create-h2snk\" (UID: \"6d299bdd-7d22-4c41-8dbf-c09b655e2705\") " pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.672577 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21152da4-0b6c-43d5-979f-6178105a507a-operator-scripts\") pod \"cinder-affe-account-create-update-22rz7\" (UID: \"21152da4-0b6c-43d5-979f-6178105a507a\") " pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.672694 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d299bdd-7d22-4c41-8dbf-c09b655e2705-operator-scripts\") pod \"cinder-db-create-h2snk\" (UID: 
\"6d299bdd-7d22-4c41-8dbf-c09b655e2705\") " pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.674598 4982 generic.go:334] "Generic (PLEG): container finished" podID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerID="968d4188192e031f40dd2366a95130a7924ee572be0dd4cb10d663b47855b9f5" exitCode=0 Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.674727 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" event={"ID":"05fc4eea-f1dd-45dd-952c-dd01bff24a3b","Type":"ContainerDied","Data":"968d4188192e031f40dd2366a95130a7924ee572be0dd4cb10d663b47855b9f5"} Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.674765 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" event={"ID":"05fc4eea-f1dd-45dd-952c-dd01bff24a3b","Type":"ContainerStarted","Data":"89e58ff813e15709c17b2902fbab99d5467ee19c8dff7918ff997ccc25b4cba2"} Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.706180 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75xht\" (UniqueName: \"kubernetes.io/projected/21152da4-0b6c-43d5-979f-6178105a507a-kube-api-access-75xht\") pod \"cinder-affe-account-create-update-22rz7\" (UID: \"21152da4-0b6c-43d5-979f-6178105a507a\") " pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.709099 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9lql\" (UniqueName: \"kubernetes.io/projected/6d299bdd-7d22-4c41-8dbf-c09b655e2705-kube-api-access-r9lql\") pod \"cinder-db-create-h2snk\" (UID: \"6d299bdd-7d22-4c41-8dbf-c09b655e2705\") " pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.721983 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.730373 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.746403 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-fd44-account-create-update-xsf4h"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.747826 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.752353 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.764539 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-fd44-account-create-update-xsf4h"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.773682 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789pv\" (UniqueName: \"kubernetes.io/projected/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-kube-api-access-789pv\") pod \"heat-db-create-c7jsl\" (UID: \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\") " pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.773810 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-operator-scripts\") pod \"heat-db-create-c7jsl\" (UID: \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\") " pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.773912 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7svpk\" (UniqueName: \"kubernetes.io/projected/83e0de90-6da0-41e9-9b1d-f0fd256d010c-kube-api-access-7svpk\") pod \"barbican-cca8-account-create-update-q9fwj\" (UID: \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\") " pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.774016 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e0de90-6da0-41e9-9b1d-f0fd256d010c-operator-scripts\") pod \"barbican-cca8-account-create-update-q9fwj\" (UID: \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\") " pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.777608 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-operator-scripts\") pod \"heat-db-create-c7jsl\" (UID: \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\") " pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.785406 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-flp5g"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.786891 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.800258 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-flp5g"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.805297 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789pv\" (UniqueName: \"kubernetes.io/projected/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-kube-api-access-789pv\") pod \"heat-db-create-c7jsl\" (UID: \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\") " pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.815779 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.819656 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-dsc9t"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.821346 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.847550 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.847787 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.847948 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.848133 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8hsbr" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.883917 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bn5m\" (UniqueName: \"kubernetes.io/projected/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-kube-api-access-4bn5m\") pod \"barbican-db-create-flp5g\" (UID: \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\") " pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.883962 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7svpk\" (UniqueName: \"kubernetes.io/projected/83e0de90-6da0-41e9-9b1d-f0fd256d010c-kube-api-access-7svpk\") pod \"barbican-cca8-account-create-update-q9fwj\" (UID: \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\") " pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.884088 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e0de90-6da0-41e9-9b1d-f0fd256d010c-operator-scripts\") pod \"barbican-cca8-account-create-update-q9fwj\" (UID: \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\") " pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.884253 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-combined-ca-bundle\") pod \"keystone-db-sync-dsc9t\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.884297 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sd6b\" (UniqueName: \"kubernetes.io/projected/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-kube-api-access-8sd6b\") pod \"heat-fd44-account-create-update-xsf4h\" (UID: \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\") " pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.895644 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-operator-scripts\") pod \"heat-fd44-account-create-update-xsf4h\" (UID: \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\") " 
pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.895707 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm2cz\" (UniqueName: \"kubernetes.io/projected/b1201af6-076e-430b-adb8-3699f6296afe-kube-api-access-mm2cz\") pod \"keystone-db-sync-dsc9t\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.895763 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-operator-scripts\") pod \"barbican-db-create-flp5g\" (UID: \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\") " pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.895818 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-config-data\") pod \"keystone-db-sync-dsc9t\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.896693 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e0de90-6da0-41e9-9b1d-f0fd256d010c-operator-scripts\") pod \"barbican-cca8-account-create-update-q9fwj\" (UID: \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\") " pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.896762 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dsc9t"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.924782 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-92mf8"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.928010 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.955575 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-92mf8"] Feb 24 15:12:43 crc kubenswrapper[4982]: I0224 15:12:43.960692 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7svpk\" (UniqueName: \"kubernetes.io/projected/83e0de90-6da0-41e9-9b1d-f0fd256d010c-kube-api-access-7svpk\") pod \"barbican-cca8-account-create-update-q9fwj\" (UID: \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\") " pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.000409 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-operator-scripts\") pod \"neutron-db-create-92mf8\" (UID: \"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\") " pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.000617 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-combined-ca-bundle\") pod \"keystone-db-sync-dsc9t\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.000670 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sd6b\" (UniqueName: \"kubernetes.io/projected/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-kube-api-access-8sd6b\") pod \"heat-fd44-account-create-update-xsf4h\" (UID: \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\") " pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.000794 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-operator-scripts\") pod \"heat-fd44-account-create-update-xsf4h\" (UID: \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\") " pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.000818 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm2cz\" (UniqueName: \"kubernetes.io/projected/b1201af6-076e-430b-adb8-3699f6296afe-kube-api-access-mm2cz\") pod \"keystone-db-sync-dsc9t\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.000849 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-operator-scripts\") pod \"barbican-db-create-flp5g\" (UID: \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\") " pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.000879 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnx7t\" (UniqueName: \"kubernetes.io/projected/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-kube-api-access-xnx7t\") pod \"neutron-db-create-92mf8\" (UID: \"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\") " pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.000912 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-config-data\") pod \"keystone-db-sync-dsc9t\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.000979 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bn5m\" (UniqueName: \"kubernetes.io/projected/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-kube-api-access-4bn5m\") pod \"barbican-db-create-flp5g\" (UID: \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\") " pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.002619 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-operator-scripts\") pod \"heat-fd44-account-create-update-xsf4h\" (UID: \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\") " pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.006906 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-operator-scripts\") pod \"barbican-db-create-flp5g\" (UID: \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\") " pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.013815 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-config-data\") pod \"keystone-db-sync-dsc9t\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.018973 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-combined-ca-bundle\") pod \"keystone-db-sync-dsc9t\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.032419 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm2cz\" (UniqueName: \"kubernetes.io/projected/b1201af6-076e-430b-adb8-3699f6296afe-kube-api-access-mm2cz\") pod \"keystone-db-sync-dsc9t\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.034487 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e5cf-account-create-update-l6nx8"] Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.036971 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.038381 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bn5m\" (UniqueName: \"kubernetes.io/projected/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-kube-api-access-4bn5m\") pod \"barbican-db-create-flp5g\" (UID: \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\") " pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.044333 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.045225 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sd6b\" (UniqueName: \"kubernetes.io/projected/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-kube-api-access-8sd6b\") pod \"heat-fd44-account-create-update-xsf4h\" (UID: \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\") " pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.083689 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e5cf-account-create-update-l6nx8"] Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.102585 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnx7t\" (UniqueName: \"kubernetes.io/projected/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-kube-api-access-xnx7t\") pod \"neutron-db-create-92mf8\" (UID: \"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\") " pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.102740 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxpl\" (UniqueName: \"kubernetes.io/projected/0b424dbf-bc16-404a-806e-7be5855b43c8-kube-api-access-fpxpl\") pod \"neutron-e5cf-account-create-update-l6nx8\" (UID: \"0b424dbf-bc16-404a-806e-7be5855b43c8\") " pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.102797 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b424dbf-bc16-404a-806e-7be5855b43c8-operator-scripts\") pod \"neutron-e5cf-account-create-update-l6nx8\" (UID: \"0b424dbf-bc16-404a-806e-7be5855b43c8\") " pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.103387 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-operator-scripts\") pod \"neutron-db-create-92mf8\" (UID: \"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\") " pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.106164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-operator-scripts\") pod \"neutron-db-create-92mf8\" (UID: \"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\") " pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.141337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnx7t\" (UniqueName: \"kubernetes.io/projected/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-kube-api-access-xnx7t\") pod \"neutron-db-create-92mf8\" (UID: 
\"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\") " pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.197621 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.207567 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpxpl\" (UniqueName: \"kubernetes.io/projected/0b424dbf-bc16-404a-806e-7be5855b43c8-kube-api-access-fpxpl\") pod \"neutron-e5cf-account-create-update-l6nx8\" (UID: \"0b424dbf-bc16-404a-806e-7be5855b43c8\") " pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.207871 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b424dbf-bc16-404a-806e-7be5855b43c8-operator-scripts\") pod \"neutron-e5cf-account-create-update-l6nx8\" (UID: \"0b424dbf-bc16-404a-806e-7be5855b43c8\") " pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.213794 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b424dbf-bc16-404a-806e-7be5855b43c8-operator-scripts\") pod \"neutron-e5cf-account-create-update-l6nx8\" (UID: \"0b424dbf-bc16-404a-806e-7be5855b43c8\") " pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.220973 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.221061 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.233114 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpxpl\" (UniqueName: \"kubernetes.io/projected/0b424dbf-bc16-404a-806e-7be5855b43c8-kube-api-access-fpxpl\") pod \"neutron-e5cf-account-create-update-l6nx8\" (UID: \"0b424dbf-bc16-404a-806e-7be5855b43c8\") " pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.242421 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.292815 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.377224 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.509209 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h2snk"] Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.581488 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-affe-account-create-update-22rz7"] Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.686214 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-c7jsl"] Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.696914 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-affe-account-create-update-22rz7" event={"ID":"21152da4-0b6c-43d5-979f-6178105a507a","Type":"ContainerStarted","Data":"8c8363b30fd7c5e75aa5abea604c8d36a433cd6357f381a34397e20a0b31edea"} Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.699795 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" event={"ID":"05fc4eea-f1dd-45dd-952c-dd01bff24a3b","Type":"ContainerStarted","Data":"6172f6d40939f3dfda016aad513a08117a4d14c6bf36822941ce521ad74135a2"} Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.699943 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.704202 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h2snk" event={"ID":"6d299bdd-7d22-4c41-8dbf-c09b655e2705","Type":"ContainerStarted","Data":"35aee2249e08886590244a336716412ece3bc2ec02707fdd355193c4b3e72e12"} Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.725487 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" podStartSLOduration=3.725468935 podStartE2EDuration="3.725468935s" podCreationTimestamp="2026-02-24 15:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:12:44.724787716 +0000 UTC m=+1426.343846209" watchObservedRunningTime="2026-02-24 15:12:44.725468935 +0000 UTC m=+1426.344527418" Feb 24 15:12:44 crc kubenswrapper[4982]: W0224 15:12:44.918151 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa862be_6ab3_4fea_85d4_f08b59e0dbc8.slice/crio-ae389c51ec17d3130e17f0ee6a1ee3cfa65c8d7a8894c1120609c453955e90a7 WatchSource:0}: Error finding container ae389c51ec17d3130e17f0ee6a1ee3cfa65c8d7a8894c1120609c453955e90a7: Status 404 returned error can't find the container with id ae389c51ec17d3130e17f0ee6a1ee3cfa65c8d7a8894c1120609c453955e90a7 Feb 24 15:12:44 crc kubenswrapper[4982]: I0224 15:12:44.924771 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-flp5g"] Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.212316 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-cca8-account-create-update-q9fwj"] Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.229565 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dsc9t"] Feb 24 15:12:45 crc kubenswrapper[4982]: W0224 15:12:45.250751 4982 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83e0de90_6da0_41e9_9b1d_f0fd256d010c.slice/crio-66af37d07d85b51d181f815af8388c799a90cbdb71888ff39abbb248e6cc1c51 WatchSource:0}: Error finding container 66af37d07d85b51d181f815af8388c799a90cbdb71888ff39abbb248e6cc1c51: Status 404 returned error can't find the container with id 66af37d07d85b51d181f815af8388c799a90cbdb71888ff39abbb248e6cc1c51 Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.258524 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-fd44-account-create-update-xsf4h"] Feb 24 15:12:45 crc kubenswrapper[4982]: W0224 15:12:45.309739 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd4a2df_971c_4ebe_9434_01c1ae4f55d8.slice/crio-d47bb5eae5c347beb1d235fe7ff379b77f660ac0e4bce3339bbce8ed61cb2fa9 WatchSource:0}: Error finding container d47bb5eae5c347beb1d235fe7ff379b77f660ac0e4bce3339bbce8ed61cb2fa9: Status 404 returned error can't find the container with id d47bb5eae5c347beb1d235fe7ff379b77f660ac0e4bce3339bbce8ed61cb2fa9 Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.432280 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e5cf-account-create-update-l6nx8"] Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.445622 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-92mf8"] Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.720242 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-92mf8" event={"ID":"40519aa0-b7b2-4e5c-898c-b73365c9d8f0","Type":"ContainerStarted","Data":"3b30746b4586de72bc75365a80f300c09b79873df16edd8aa0362c32420e96a5"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.722320 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e5cf-account-create-update-l6nx8" event={"ID":"0b424dbf-bc16-404a-806e-7be5855b43c8","Type":"ContainerStarted","Data":"ff7a14c0ad96b6db6b39c5ce58af0cde99305e1490a62a7534c2464139183c18"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.724564 4982 generic.go:334] "Generic (PLEG): container finished" podID="6d299bdd-7d22-4c41-8dbf-c09b655e2705" containerID="20656f755731f28474d24df51773671b95b6e7a395d817f5c2c3bffdd04504e4" exitCode=0 Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.724648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h2snk" event={"ID":"6d299bdd-7d22-4c41-8dbf-c09b655e2705","Type":"ContainerDied","Data":"20656f755731f28474d24df51773671b95b6e7a395d817f5c2c3bffdd04504e4"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.727811 4982 generic.go:334] "Generic (PLEG): container finished" podID="21152da4-0b6c-43d5-979f-6178105a507a" containerID="77db16a9a7024e3405855d44796cb4b2bd9a923a4ebc9c96c95a7eddeee27918" exitCode=0 Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.727937 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-affe-account-create-update-22rz7" event={"ID":"21152da4-0b6c-43d5-979f-6178105a507a","Type":"ContainerDied","Data":"77db16a9a7024e3405855d44796cb4b2bd9a923a4ebc9c96c95a7eddeee27918"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.731554 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-fd44-account-create-update-xsf4h" 
event={"ID":"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8","Type":"ContainerStarted","Data":"d47bb5eae5c347beb1d235fe7ff379b77f660ac0e4bce3339bbce8ed61cb2fa9"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.733520 4982 generic.go:334] "Generic (PLEG): container finished" podID="aaa862be-6ab3-4fea-85d4-f08b59e0dbc8" containerID="ac54d93d444f97afb63a0a7b149d65d9f8f7f122655e37b5fce520dfe571c2cb" exitCode=0 Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.733578 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-flp5g" event={"ID":"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8","Type":"ContainerDied","Data":"ac54d93d444f97afb63a0a7b149d65d9f8f7f122655e37b5fce520dfe571c2cb"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.733598 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-flp5g" event={"ID":"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8","Type":"ContainerStarted","Data":"ae389c51ec17d3130e17f0ee6a1ee3cfa65c8d7a8894c1120609c453955e90a7"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.735348 4982 generic.go:334] "Generic (PLEG): container finished" podID="b2de08c7-91d6-4dd1-af64-cc09f7f43e0c" containerID="b79c94e05afb859fb9752b28190d3f76dea1b12897f2c2aeaec063824cc287b8" exitCode=0 Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.735417 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-c7jsl" event={"ID":"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c","Type":"ContainerDied","Data":"b79c94e05afb859fb9752b28190d3f76dea1b12897f2c2aeaec063824cc287b8"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.735442 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-c7jsl" event={"ID":"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c","Type":"ContainerStarted","Data":"93a1a94f36f9ad8c06183b1c11c0d785982720dfb8bad07c0938ecb78ca88242"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.744054 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cca8-account-create-update-q9fwj" event={"ID":"83e0de90-6da0-41e9-9b1d-f0fd256d010c","Type":"ContainerStarted","Data":"66af37d07d85b51d181f815af8388c799a90cbdb71888ff39abbb248e6cc1c51"} Feb 24 15:12:45 crc kubenswrapper[4982]: I0224 15:12:45.745582 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dsc9t" event={"ID":"b1201af6-076e-430b-adb8-3699f6296afe","Type":"ContainerStarted","Data":"f46d0064a615a57483ce748941d72cb70cfc4c3621bcd5e58e4aaf018737575d"} Feb 24 15:12:46 crc kubenswrapper[4982]: E0224 15:12:46.340336 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40519aa0_b7b2_4e5c_898c_b73365c9d8f0.slice/crio-conmon-4b1ea4d2bcdcf25d33b0d93e969054a7ceecce78c81a96d7388ad07c54526a1d.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:12:46 crc kubenswrapper[4982]: I0224 15:12:46.766798 4982 generic.go:334] "Generic (PLEG): container finished" podID="0b424dbf-bc16-404a-806e-7be5855b43c8" containerID="8fa6882bfad1c4c27ba2e8449c7ea780e97711c3921be43f0718b818a97c4f72" exitCode=0 Feb 24 15:12:46 crc kubenswrapper[4982]: I0224 15:12:46.766872 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e5cf-account-create-update-l6nx8" event={"ID":"0b424dbf-bc16-404a-806e-7be5855b43c8","Type":"ContainerDied","Data":"8fa6882bfad1c4c27ba2e8449c7ea780e97711c3921be43f0718b818a97c4f72"} Feb 24 15:12:46 crc 
kubenswrapper[4982]: I0224 15:12:46.770342 4982 generic.go:334] "Generic (PLEG): container finished" podID="83e0de90-6da0-41e9-9b1d-f0fd256d010c" containerID="acb127858722980d1d59986ecc9ed697b53863bdb064a31dfd224ef83ae4d78d" exitCode=0 Feb 24 15:12:46 crc kubenswrapper[4982]: I0224 15:12:46.770492 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cca8-account-create-update-q9fwj" event={"ID":"83e0de90-6da0-41e9-9b1d-f0fd256d010c","Type":"ContainerDied","Data":"acb127858722980d1d59986ecc9ed697b53863bdb064a31dfd224ef83ae4d78d"} Feb 24 15:12:46 crc kubenswrapper[4982]: I0224 15:12:46.775336 4982 generic.go:334] "Generic (PLEG): container finished" podID="40519aa0-b7b2-4e5c-898c-b73365c9d8f0" containerID="4b1ea4d2bcdcf25d33b0d93e969054a7ceecce78c81a96d7388ad07c54526a1d" exitCode=0 Feb 24 15:12:46 crc kubenswrapper[4982]: I0224 15:12:46.775433 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-92mf8" event={"ID":"40519aa0-b7b2-4e5c-898c-b73365c9d8f0","Type":"ContainerDied","Data":"4b1ea4d2bcdcf25d33b0d93e969054a7ceecce78c81a96d7388ad07c54526a1d"} Feb 24 15:12:46 crc kubenswrapper[4982]: I0224 15:12:46.777827 4982 generic.go:334] "Generic (PLEG): container finished" podID="7dd4a2df-971c-4ebe-9434-01c1ae4f55d8" containerID="cfab23c02d2b7f2459ada234ecc4460764c981373974b97c5e3a5006e8aefccd" exitCode=0 Feb 24 15:12:46 crc kubenswrapper[4982]: I0224 15:12:46.777929 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-fd44-account-create-update-xsf4h" event={"ID":"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8","Type":"ContainerDied","Data":"cfab23c02d2b7f2459ada234ecc4460764c981373974b97c5e3a5006e8aefccd"} Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.393650 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.493299 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-operator-scripts\") pod \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\" (UID: \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\") " Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.493394 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bn5m\" (UniqueName: \"kubernetes.io/projected/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-kube-api-access-4bn5m\") pod \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\" (UID: \"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8\") " Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.494185 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aaa862be-6ab3-4fea-85d4-f08b59e0dbc8" (UID: "aaa862be-6ab3-4fea-85d4-f08b59e0dbc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.502850 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-kube-api-access-4bn5m" (OuterVolumeSpecName: "kube-api-access-4bn5m") pod "aaa862be-6ab3-4fea-85d4-f08b59e0dbc8" (UID: "aaa862be-6ab3-4fea-85d4-f08b59e0dbc8"). InnerVolumeSpecName "kube-api-access-4bn5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.595710 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.595763 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bn5m\" (UniqueName: \"kubernetes.io/projected/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8-kube-api-access-4bn5m\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.597270 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.607899 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.613551 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.697456 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-operator-scripts\") pod \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\" (UID: \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\") " Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.697536 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d299bdd-7d22-4c41-8dbf-c09b655e2705-operator-scripts\") pod \"6d299bdd-7d22-4c41-8dbf-c09b655e2705\" (UID: \"6d299bdd-7d22-4c41-8dbf-c09b655e2705\") " Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.697718 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21152da4-0b6c-43d5-979f-6178105a507a-operator-scripts\") pod \"21152da4-0b6c-43d5-979f-6178105a507a\" (UID: \"21152da4-0b6c-43d5-979f-6178105a507a\") " Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.697828 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-789pv\" (UniqueName: \"kubernetes.io/projected/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-kube-api-access-789pv\") pod \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\" (UID: \"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c\") " Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.697935 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9lql\" (UniqueName: \"kubernetes.io/projected/6d299bdd-7d22-4c41-8dbf-c09b655e2705-kube-api-access-r9lql\") pod \"6d299bdd-7d22-4c41-8dbf-c09b655e2705\" (UID: \"6d299bdd-7d22-4c41-8dbf-c09b655e2705\") " Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.699395 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d299bdd-7d22-4c41-8dbf-c09b655e2705-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d299bdd-7d22-4c41-8dbf-c09b655e2705" (UID: "6d299bdd-7d22-4c41-8dbf-c09b655e2705"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.699615 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75xht\" (UniqueName: \"kubernetes.io/projected/21152da4-0b6c-43d5-979f-6178105a507a-kube-api-access-75xht\") pod \"21152da4-0b6c-43d5-979f-6178105a507a\" (UID: \"21152da4-0b6c-43d5-979f-6178105a507a\") " Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.700216 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2de08c7-91d6-4dd1-af64-cc09f7f43e0c" (UID: "b2de08c7-91d6-4dd1-af64-cc09f7f43e0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.700662 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.700688 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d299bdd-7d22-4c41-8dbf-c09b655e2705-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.705390 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21152da4-0b6c-43d5-979f-6178105a507a-kube-api-access-75xht" (OuterVolumeSpecName: "kube-api-access-75xht") pod "21152da4-0b6c-43d5-979f-6178105a507a" (UID: "21152da4-0b6c-43d5-979f-6178105a507a"). InnerVolumeSpecName "kube-api-access-75xht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.705898 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-kube-api-access-789pv" (OuterVolumeSpecName: "kube-api-access-789pv") pod "b2de08c7-91d6-4dd1-af64-cc09f7f43e0c" (UID: "b2de08c7-91d6-4dd1-af64-cc09f7f43e0c"). InnerVolumeSpecName "kube-api-access-789pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.707333 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21152da4-0b6c-43d5-979f-6178105a507a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21152da4-0b6c-43d5-979f-6178105a507a" (UID: "21152da4-0b6c-43d5-979f-6178105a507a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.717714 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d299bdd-7d22-4c41-8dbf-c09b655e2705-kube-api-access-r9lql" (OuterVolumeSpecName: "kube-api-access-r9lql") pod "6d299bdd-7d22-4c41-8dbf-c09b655e2705" (UID: "6d299bdd-7d22-4c41-8dbf-c09b655e2705"). InnerVolumeSpecName "kube-api-access-r9lql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.794741 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-affe-account-create-update-22rz7" event={"ID":"21152da4-0b6c-43d5-979f-6178105a507a","Type":"ContainerDied","Data":"8c8363b30fd7c5e75aa5abea604c8d36a433cd6357f381a34397e20a0b31edea"} Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.794788 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c8363b30fd7c5e75aa5abea604c8d36a433cd6357f381a34397e20a0b31edea" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.794799 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-affe-account-create-update-22rz7" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.799347 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-flp5g" event={"ID":"aaa862be-6ab3-4fea-85d4-f08b59e0dbc8","Type":"ContainerDied","Data":"ae389c51ec17d3130e17f0ee6a1ee3cfa65c8d7a8894c1120609c453955e90a7"} Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.799369 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae389c51ec17d3130e17f0ee6a1ee3cfa65c8d7a8894c1120609c453955e90a7" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.799371 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-flp5g" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.802334 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-c7jsl" event={"ID":"b2de08c7-91d6-4dd1-af64-cc09f7f43e0c","Type":"ContainerDied","Data":"93a1a94f36f9ad8c06183b1c11c0d785982720dfb8bad07c0938ecb78ca88242"} Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.802402 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93a1a94f36f9ad8c06183b1c11c0d785982720dfb8bad07c0938ecb78ca88242" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.802519 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-c7jsl" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.805775 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h2snk" event={"ID":"6d299bdd-7d22-4c41-8dbf-c09b655e2705","Type":"ContainerDied","Data":"35aee2249e08886590244a336716412ece3bc2ec02707fdd355193c4b3e72e12"} Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.805797 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h2snk" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.805815 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35aee2249e08886590244a336716412ece3bc2ec02707fdd355193c4b3e72e12" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.806470 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9lql\" (UniqueName: \"kubernetes.io/projected/6d299bdd-7d22-4c41-8dbf-c09b655e2705-kube-api-access-r9lql\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.806781 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75xht\" (UniqueName: \"kubernetes.io/projected/21152da4-0b6c-43d5-979f-6178105a507a-kube-api-access-75xht\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.806802 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21152da4-0b6c-43d5-979f-6178105a507a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:47 crc kubenswrapper[4982]: I0224 15:12:47.806817 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-789pv\" (UniqueName: \"kubernetes.io/projected/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c-kube-api-access-789pv\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:48 crc kubenswrapper[4982]: I0224 15:12:48.816830 4982 generic.go:334] "Generic (PLEG): container finished" podID="d4f4cf82-cc11-498c-a168-ca862bfcd361" containerID="0b78dafa1790694bcbdfa6dbb47a2cf2451702a7f92379f2db2f87d5186ed688" exitCode=0 Feb 24 15:12:48 crc kubenswrapper[4982]: I0224 15:12:48.816924 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4f4cf82-cc11-498c-a168-ca862bfcd361","Type":"ContainerDied","Data":"0b78dafa1790694bcbdfa6dbb47a2cf2451702a7f92379f2db2f87d5186ed688"} Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.837734 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-92mf8" event={"ID":"40519aa0-b7b2-4e5c-898c-b73365c9d8f0","Type":"ContainerDied","Data":"3b30746b4586de72bc75365a80f300c09b79873df16edd8aa0362c32420e96a5"} Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.838369 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b30746b4586de72bc75365a80f300c09b79873df16edd8aa0362c32420e96a5" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.840777 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-fd44-account-create-update-xsf4h" event={"ID":"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8","Type":"ContainerDied","Data":"d47bb5eae5c347beb1d235fe7ff379b77f660ac0e4bce3339bbce8ed61cb2fa9"} Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.840897 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d47bb5eae5c347beb1d235fe7ff379b77f660ac0e4bce3339bbce8ed61cb2fa9" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.841994 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.842382 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e5cf-account-create-update-l6nx8" event={"ID":"0b424dbf-bc16-404a-806e-7be5855b43c8","Type":"ContainerDied","Data":"ff7a14c0ad96b6db6b39c5ce58af0cde99305e1490a62a7534c2464139183c18"} Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.842435 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff7a14c0ad96b6db6b39c5ce58af0cde99305e1490a62a7534c2464139183c18" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.843723 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cca8-account-create-update-q9fwj" event={"ID":"83e0de90-6da0-41e9-9b1d-f0fd256d010c","Type":"ContainerDied","Data":"66af37d07d85b51d181f815af8388c799a90cbdb71888ff39abbb248e6cc1c51"} Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.843764 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66af37d07d85b51d181f815af8388c799a90cbdb71888ff39abbb248e6cc1c51" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.849516 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.870360 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpxpl\" (UniqueName: \"kubernetes.io/projected/0b424dbf-bc16-404a-806e-7be5855b43c8-kube-api-access-fpxpl\") pod \"0b424dbf-bc16-404a-806e-7be5855b43c8\" (UID: \"0b424dbf-bc16-404a-806e-7be5855b43c8\") " Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.870443 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b424dbf-bc16-404a-806e-7be5855b43c8-operator-scripts\") pod \"0b424dbf-bc16-404a-806e-7be5855b43c8\" (UID: \"0b424dbf-bc16-404a-806e-7be5855b43c8\") " Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.870642 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-operator-scripts\") pod \"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\" (UID: \"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\") " Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.870849 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnx7t\" (UniqueName: \"kubernetes.io/projected/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-kube-api-access-xnx7t\") pod \"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\" (UID: \"40519aa0-b7b2-4e5c-898c-b73365c9d8f0\") " Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.872908 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40519aa0-b7b2-4e5c-898c-b73365c9d8f0" (UID: "40519aa0-b7b2-4e5c-898c-b73365c9d8f0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.873363 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b424dbf-bc16-404a-806e-7be5855b43c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b424dbf-bc16-404a-806e-7be5855b43c8" (UID: "0b424dbf-bc16-404a-806e-7be5855b43c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.885855 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-kube-api-access-xnx7t" (OuterVolumeSpecName: "kube-api-access-xnx7t") pod "40519aa0-b7b2-4e5c-898c-b73365c9d8f0" (UID: "40519aa0-b7b2-4e5c-898c-b73365c9d8f0"). InnerVolumeSpecName "kube-api-access-xnx7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.885935 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b424dbf-bc16-404a-806e-7be5855b43c8-kube-api-access-fpxpl" (OuterVolumeSpecName: "kube-api-access-fpxpl") pod "0b424dbf-bc16-404a-806e-7be5855b43c8" (UID: "0b424dbf-bc16-404a-806e-7be5855b43c8"). InnerVolumeSpecName "kube-api-access-fpxpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.889002 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.976645 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7svpk\" (UniqueName: \"kubernetes.io/projected/83e0de90-6da0-41e9-9b1d-f0fd256d010c-kube-api-access-7svpk\") pod \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\" (UID: \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\") " Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.977100 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e0de90-6da0-41e9-9b1d-f0fd256d010c-operator-scripts\") pod \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\" (UID: \"83e0de90-6da0-41e9-9b1d-f0fd256d010c\") " Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.977652 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpxpl\" (UniqueName: \"kubernetes.io/projected/0b424dbf-bc16-404a-806e-7be5855b43c8-kube-api-access-fpxpl\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.977674 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b424dbf-bc16-404a-806e-7be5855b43c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.977686 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.977698 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnx7t\" (UniqueName: \"kubernetes.io/projected/40519aa0-b7b2-4e5c-898c-b73365c9d8f0-kube-api-access-xnx7t\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.977881 4982 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83e0de90-6da0-41e9-9b1d-f0fd256d010c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83e0de90-6da0-41e9-9b1d-f0fd256d010c" (UID: "83e0de90-6da0-41e9-9b1d-f0fd256d010c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:50 crc kubenswrapper[4982]: I0224 15:12:50.980335 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e0de90-6da0-41e9-9b1d-f0fd256d010c-kube-api-access-7svpk" (OuterVolumeSpecName: "kube-api-access-7svpk") pod "83e0de90-6da0-41e9-9b1d-f0fd256d010c" (UID: "83e0de90-6da0-41e9-9b1d-f0fd256d010c"). InnerVolumeSpecName "kube-api-access-7svpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.080080 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7svpk\" (UniqueName: \"kubernetes.io/projected/83e0de90-6da0-41e9-9b1d-f0fd256d010c-kube-api-access-7svpk\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.080122 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e0de90-6da0-41e9-9b1d-f0fd256d010c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.122518 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.182917 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-operator-scripts\") pod \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\" (UID: \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\") " Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.182977 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sd6b\" (UniqueName: \"kubernetes.io/projected/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-kube-api-access-8sd6b\") pod \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\" (UID: \"7dd4a2df-971c-4ebe-9434-01c1ae4f55d8\") " Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.183553 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dd4a2df-971c-4ebe-9434-01c1ae4f55d8" (UID: "7dd4a2df-971c-4ebe-9434-01c1ae4f55d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.186558 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.186984 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-kube-api-access-8sd6b" (OuterVolumeSpecName: "kube-api-access-8sd6b") pod "7dd4a2df-971c-4ebe-9434-01c1ae4f55d8" (UID: "7dd4a2df-971c-4ebe-9434-01c1ae4f55d8"). InnerVolumeSpecName "kube-api-access-8sd6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.288801 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sd6b\" (UniqueName: \"kubernetes.io/projected/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8-kube-api-access-8sd6b\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.855936 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4f4cf82-cc11-498c-a168-ca862bfcd361","Type":"ContainerStarted","Data":"4dbd0f92a7469adfcaa10daa63b587ffc502b8a6e9c1310a84c201e71a82bd6c"} Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.857176 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dsc9t" event={"ID":"b1201af6-076e-430b-adb8-3699f6296afe","Type":"ContainerStarted","Data":"bc7f358bfe408b8cf71a0900f97e8f26680ed0a0085a778bd607254bf3be6326"} Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.857200 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-fd44-account-create-update-xsf4h" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.857219 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-92mf8" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.857344 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e5cf-account-create-update-l6nx8" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.857704 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cca8-account-create-update-q9fwj" Feb 24 15:12:51 crc kubenswrapper[4982]: I0224 15:12:51.916742 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-dsc9t" podStartSLOduration=3.2197763950000002 podStartE2EDuration="8.916714767s" podCreationTimestamp="2026-02-24 15:12:43 +0000 UTC" firstStartedPulling="2026-02-24 15:12:45.248826865 +0000 UTC m=+1426.867885368" lastFinishedPulling="2026-02-24 15:12:50.945765247 +0000 UTC m=+1432.564823740" observedRunningTime="2026-02-24 15:12:51.882362973 +0000 UTC m=+1433.501421466" watchObservedRunningTime="2026-02-24 15:12:51.916714767 +0000 UTC m=+1433.535773300" Feb 24 15:12:52 crc kubenswrapper[4982]: I0224 15:12:52.267647 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:12:52 crc kubenswrapper[4982]: I0224 15:12:52.339125 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jjgq9"] Feb 24 15:12:52 crc kubenswrapper[4982]: I0224 15:12:52.339775 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" podUID="612828ed-0dec-4a50-b9af-2e9e63864167" containerName="dnsmasq-dns" containerID="cri-o://e65a5b62b701be4a32fd5be5e8e37052a0d069f1465949ba316bc1b9eda32b7d" gracePeriod=10 Feb 24 15:12:52 crc kubenswrapper[4982]: I0224 15:12:52.873178 4982 generic.go:334] "Generic (PLEG): container finished" podID="612828ed-0dec-4a50-b9af-2e9e63864167" containerID="e65a5b62b701be4a32fd5be5e8e37052a0d069f1465949ba316bc1b9eda32b7d" exitCode=0 Feb 24 15:12:52 crc kubenswrapper[4982]: I0224 15:12:52.873807 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" 
event={"ID":"612828ed-0dec-4a50-b9af-2e9e63864167","Type":"ContainerDied","Data":"e65a5b62b701be4a32fd5be5e8e37052a0d069f1465949ba316bc1b9eda32b7d"} Feb 24 15:12:52 crc kubenswrapper[4982]: I0224 15:12:52.874026 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" event={"ID":"612828ed-0dec-4a50-b9af-2e9e63864167","Type":"ContainerDied","Data":"95a8750796ab31c0a773a490f7ce2d9dc85b8214877f375d1609985a756bb4c6"} Feb 24 15:12:52 crc kubenswrapper[4982]: I0224 15:12:52.874040 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95a8750796ab31c0a773a490f7ce2d9dc85b8214877f375d1609985a756bb4c6" Feb 24 15:12:52 crc kubenswrapper[4982]: I0224 15:12:52.923658 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.047207 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-dns-svc\") pod \"612828ed-0dec-4a50-b9af-2e9e63864167\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.047274 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-nb\") pod \"612828ed-0dec-4a50-b9af-2e9e63864167\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.047311 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtcrc\" (UniqueName: \"kubernetes.io/projected/612828ed-0dec-4a50-b9af-2e9e63864167-kube-api-access-gtcrc\") pod \"612828ed-0dec-4a50-b9af-2e9e63864167\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.047474 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-sb\") pod \"612828ed-0dec-4a50-b9af-2e9e63864167\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.047534 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-config\") pod \"612828ed-0dec-4a50-b9af-2e9e63864167\" (UID: \"612828ed-0dec-4a50-b9af-2e9e63864167\") " Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.062810 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612828ed-0dec-4a50-b9af-2e9e63864167-kube-api-access-gtcrc" (OuterVolumeSpecName: "kube-api-access-gtcrc") pod "612828ed-0dec-4a50-b9af-2e9e63864167" (UID: "612828ed-0dec-4a50-b9af-2e9e63864167"). InnerVolumeSpecName "kube-api-access-gtcrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.150198 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtcrc\" (UniqueName: \"kubernetes.io/projected/612828ed-0dec-4a50-b9af-2e9e63864167-kube-api-access-gtcrc\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.196405 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "612828ed-0dec-4a50-b9af-2e9e63864167" (UID: "612828ed-0dec-4a50-b9af-2e9e63864167"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.201807 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-config" (OuterVolumeSpecName: "config") pod "612828ed-0dec-4a50-b9af-2e9e63864167" (UID: "612828ed-0dec-4a50-b9af-2e9e63864167"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.203887 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "612828ed-0dec-4a50-b9af-2e9e63864167" (UID: "612828ed-0dec-4a50-b9af-2e9e63864167"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.203942 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "612828ed-0dec-4a50-b9af-2e9e63864167" (UID: "612828ed-0dec-4a50-b9af-2e9e63864167"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.253934 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.253980 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.253993 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.254005 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/612828ed-0dec-4a50-b9af-2e9e63864167-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.885546 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-jjgq9" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.885580 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-82562" event={"ID":"1ed4027c-742c-4789-9fad-fc912c419d6d","Type":"ContainerStarted","Data":"669c10dd2fdf5afa8e3a5942437580742fe649f035edb7476f428311dc5b1681"} Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.905718 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-82562" podStartSLOduration=3.255273392 podStartE2EDuration="37.905695209s" podCreationTimestamp="2026-02-24 15:12:16 +0000 UTC" firstStartedPulling="2026-02-24 15:12:17.996543276 +0000 UTC m=+1399.615601769" lastFinishedPulling="2026-02-24 15:12:52.646965093 +0000 UTC m=+1434.266023586" observedRunningTime="2026-02-24 15:12:53.90099935 +0000 UTC m=+1435.520057863" watchObservedRunningTime="2026-02-24 15:12:53.905695209 +0000 UTC m=+1435.524753712" Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.929902 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jjgq9"] Feb 24 15:12:53 crc kubenswrapper[4982]: I0224 15:12:53.938931 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-jjgq9"] Feb 24 15:12:54 crc kubenswrapper[4982]: I0224 15:12:54.898310 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4f4cf82-cc11-498c-a168-ca862bfcd361","Type":"ContainerStarted","Data":"6ac8733e88f565c604063b4ba88c476d6895c01d83cc1edef54882f5b70d587f"} Feb 24 15:12:54 crc kubenswrapper[4982]: I0224 15:12:54.898676 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4f4cf82-cc11-498c-a168-ca862bfcd361","Type":"ContainerStarted","Data":"949c6993713ec9552619bc85f8d1bc26e9c23d0feeb9ff81720e729ee862d899"} Feb 24 15:12:54 crc kubenswrapper[4982]: I0224 15:12:54.934246 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.934228274 podStartE2EDuration="17.934228274s" podCreationTimestamp="2026-02-24 15:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:12:54.925836926 +0000 UTC m=+1436.544895439" watchObservedRunningTime="2026-02-24 15:12:54.934228274 +0000 UTC m=+1436.553286767" Feb 24 15:12:55 crc kubenswrapper[4982]: I0224 15:12:55.159139 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612828ed-0dec-4a50-b9af-2e9e63864167" path="/var/lib/kubelet/pods/612828ed-0dec-4a50-b9af-2e9e63864167/volumes" Feb 24 15:12:55 crc kubenswrapper[4982]: I0224 15:12:55.913583 4982 generic.go:334] "Generic (PLEG): container finished" podID="b1201af6-076e-430b-adb8-3699f6296afe" containerID="bc7f358bfe408b8cf71a0900f97e8f26680ed0a0085a778bd607254bf3be6326" exitCode=0 Feb 24 15:12:55 crc kubenswrapper[4982]: I0224 15:12:55.913696 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dsc9t" event={"ID":"b1201af6-076e-430b-adb8-3699f6296afe","Type":"ContainerDied","Data":"bc7f358bfe408b8cf71a0900f97e8f26680ed0a0085a778bd607254bf3be6326"} Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.349904 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.446812 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-combined-ca-bundle\") pod \"b1201af6-076e-430b-adb8-3699f6296afe\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.446878 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm2cz\" (UniqueName: \"kubernetes.io/projected/b1201af6-076e-430b-adb8-3699f6296afe-kube-api-access-mm2cz\") pod \"b1201af6-076e-430b-adb8-3699f6296afe\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.447122 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-config-data\") pod \"b1201af6-076e-430b-adb8-3699f6296afe\" (UID: \"b1201af6-076e-430b-adb8-3699f6296afe\") " Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.454764 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1201af6-076e-430b-adb8-3699f6296afe-kube-api-access-mm2cz" (OuterVolumeSpecName: "kube-api-access-mm2cz") pod "b1201af6-076e-430b-adb8-3699f6296afe" (UID: "b1201af6-076e-430b-adb8-3699f6296afe"). InnerVolumeSpecName "kube-api-access-mm2cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.496779 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1201af6-076e-430b-adb8-3699f6296afe" (UID: "b1201af6-076e-430b-adb8-3699f6296afe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.519702 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-config-data" (OuterVolumeSpecName: "config-data") pod "b1201af6-076e-430b-adb8-3699f6296afe" (UID: "b1201af6-076e-430b-adb8-3699f6296afe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.550279 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.550310 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1201af6-076e-430b-adb8-3699f6296afe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.550323 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm2cz\" (UniqueName: \"kubernetes.io/projected/b1201af6-076e-430b-adb8-3699f6296afe-kube-api-access-mm2cz\") on node \"crc\" DevicePath \"\"" Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.936720 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dsc9t" event={"ID":"b1201af6-076e-430b-adb8-3699f6296afe","Type":"ContainerDied","Data":"f46d0064a615a57483ce748941d72cb70cfc4c3621bcd5e58e4aaf018737575d"} Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.936784 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46d0064a615a57483ce748941d72cb70cfc4c3621bcd5e58e4aaf018737575d" Feb 24 15:12:57 crc kubenswrapper[4982]: I0224 15:12:57.936883 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dsc9t" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.219830 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.244450 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jwn5v"] Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246007 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21152da4-0b6c-43d5-979f-6178105a507a" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246034 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="21152da4-0b6c-43d5-979f-6178105a507a" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246065 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2de08c7-91d6-4dd1-af64-cc09f7f43e0c" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246074 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2de08c7-91d6-4dd1-af64-cc09f7f43e0c" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246085 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40519aa0-b7b2-4e5c-898c-b73365c9d8f0" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246100 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="40519aa0-b7b2-4e5c-898c-b73365c9d8f0" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246114 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa862be-6ab3-4fea-85d4-f08b59e0dbc8" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246122 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa862be-6ab3-4fea-85d4-f08b59e0dbc8" 
containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246148 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612828ed-0dec-4a50-b9af-2e9e63864167" containerName="dnsmasq-dns" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246157 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="612828ed-0dec-4a50-b9af-2e9e63864167" containerName="dnsmasq-dns" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246194 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd4a2df-971c-4ebe-9434-01c1ae4f55d8" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246204 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd4a2df-971c-4ebe-9434-01c1ae4f55d8" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246219 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612828ed-0dec-4a50-b9af-2e9e63864167" containerName="init" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246225 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="612828ed-0dec-4a50-b9af-2e9e63864167" containerName="init" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246235 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e0de90-6da0-41e9-9b1d-f0fd256d010c" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246244 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e0de90-6da0-41e9-9b1d-f0fd256d010c" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246892 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b424dbf-bc16-404a-806e-7be5855b43c8" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246911 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b424dbf-bc16-404a-806e-7be5855b43c8" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246926 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1201af6-076e-430b-adb8-3699f6296afe" containerName="keystone-db-sync" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246935 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1201af6-076e-430b-adb8-3699f6296afe" containerName="keystone-db-sync" Feb 24 15:12:58 crc kubenswrapper[4982]: E0224 15:12:58.246951 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d299bdd-7d22-4c41-8dbf-c09b655e2705" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.246958 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d299bdd-7d22-4c41-8dbf-c09b655e2705" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.247183 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d299bdd-7d22-4c41-8dbf-c09b655e2705" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.247201 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd4a2df-971c-4ebe-9434-01c1ae4f55d8" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.247216 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e0de90-6da0-41e9-9b1d-f0fd256d010c" containerName="mariadb-account-create-update" Feb 24 15:12:58 
crc kubenswrapper[4982]: I0224 15:12:58.247266 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="40519aa0-b7b2-4e5c-898c-b73365c9d8f0" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.247277 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="21152da4-0b6c-43d5-979f-6178105a507a" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.247287 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1201af6-076e-430b-adb8-3699f6296afe" containerName="keystone-db-sync" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.247301 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b424dbf-bc16-404a-806e-7be5855b43c8" containerName="mariadb-account-create-update" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.247312 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa862be-6ab3-4fea-85d4-f08b59e0dbc8" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.247318 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2de08c7-91d6-4dd1-af64-cc09f7f43e0c" containerName="mariadb-database-create" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.247332 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="612828ed-0dec-4a50-b9af-2e9e63864167" containerName="dnsmasq-dns" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.248921 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.272509 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8kd5n"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.274355 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.291285 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8hsbr" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.291694 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.291878 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.292025 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.292158 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.292351 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jwn5v"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.316272 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8kd5n"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.359691 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-9zp9n"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367017 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnw67\" (UniqueName: \"kubernetes.io/projected/475ae195-de59-489a-88e6-8577ee82879d-kube-api-access-pnw67\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367167 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-combined-ca-bundle\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367201 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-svc\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367233 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-config-data\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367260 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367318 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367365 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-credential-keys\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367391 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-fernet-keys\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367411 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-config\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367444 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367472 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48984\" (UniqueName: \"kubernetes.io/projected/998c1608-0fd2-413f-8b11-a6cf5a62c394-kube-api-access-48984\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.367744 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-scripts\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.368054 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.375034 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.377894 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-spq7w" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.403472 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-9zp9n"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.421886 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bzq42"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.423416 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.426194 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.433333 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.433615 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c7rfv" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.447215 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bzq42"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469075 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnw67\" (UniqueName: \"kubernetes.io/projected/475ae195-de59-489a-88e6-8577ee82879d-kube-api-access-pnw67\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469124 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrzs\" (UniqueName: \"kubernetes.io/projected/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-kube-api-access-9nrzs\") pod \"heat-db-sync-9zp9n\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469164 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-combined-ca-bundle\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469184 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-svc\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469202 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-config-data\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469218 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469247 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469264 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-combined-ca-bundle\") pod \"heat-db-sync-9zp9n\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469288 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-credential-keys\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469305 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-fernet-keys\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469322 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-config\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469346 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469364 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48984\" (UniqueName: \"kubernetes.io/projected/998c1608-0fd2-413f-8b11-a6cf5a62c394-kube-api-access-48984\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469383 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6r4\" (UniqueName: \"kubernetes.io/projected/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-kube-api-access-ms6r4\") pod \"neutron-db-sync-bzq42\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469414 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-config-data\") pod \"heat-db-sync-9zp9n\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469450 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-scripts\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469467 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-config\") pod \"neutron-db-sync-bzq42\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.469509 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-combined-ca-bundle\") pod \"neutron-db-sync-bzq42\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.474606 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-config\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.475466 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.477415 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-svc\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.477604 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.477663 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.482183 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-fernet-keys\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.483391 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-combined-ca-bundle\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.487537 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-scripts\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " 
pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.507853 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7dmfp"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.509664 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.510351 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-credential-keys\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.510615 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnw67\" (UniqueName: \"kubernetes.io/projected/475ae195-de59-489a-88e6-8577ee82879d-kube-api-access-pnw67\") pod \"dnsmasq-dns-5b868669f-jwn5v\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.514228 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wtghw" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.514513 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.514856 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.521306 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7dmfp"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.532099 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48984\" (UniqueName: \"kubernetes.io/projected/998c1608-0fd2-413f-8b11-a6cf5a62c394-kube-api-access-48984\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.539263 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-config-data\") pod \"keystone-bootstrap-8kd5n\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") " pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.574781 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6r4\" (UniqueName: \"kubernetes.io/projected/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-kube-api-access-ms6r4\") pod \"neutron-db-sync-bzq42\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.574851 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-db-sync-config-data\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.574871 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-config-data\") pod \"heat-db-sync-9zp9n\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.574924 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdb2g\" (UniqueName: \"kubernetes.io/projected/9d6a585f-95fd-47f7-a809-cceee8a3644a-kube-api-access-tdb2g\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.574945 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-config\") pod \"neutron-db-sync-bzq42\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.574980 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-combined-ca-bundle\") pod \"neutron-db-sync-bzq42\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.575054 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrzs\" (UniqueName: \"kubernetes.io/projected/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-kube-api-access-9nrzs\") pod \"heat-db-sync-9zp9n\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.575074 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-config-data\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.575102 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-scripts\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.575121 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d6a585f-95fd-47f7-a809-cceee8a3644a-etc-machine-id\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.575196 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-combined-ca-bundle\") pod \"heat-db-sync-9zp9n\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.575252 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-combined-ca-bundle\") pod \"cinder-db-sync-7dmfp\" (UID: 
\"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.583992 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.610022 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.610681 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6r4\" (UniqueName: \"kubernetes.io/projected/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-kube-api-access-ms6r4\") pod \"neutron-db-sync-bzq42\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.613333 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-combined-ca-bundle\") pod \"neutron-db-sync-bzq42\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.613813 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-config\") pod \"neutron-db-sync-bzq42\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.614628 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-config-data\") pod \"heat-db-sync-9zp9n\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.618551 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrzs\" (UniqueName: \"kubernetes.io/projected/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-kube-api-access-9nrzs\") pod \"heat-db-sync-9zp9n\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.619036 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-combined-ca-bundle\") pod \"heat-db-sync-9zp9n\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.627571 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wwkwb"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.628942 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.642936 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.643044 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vmm4w" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.643178 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.678267 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-config-data\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.678675 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d6a585f-95fd-47f7-a809-cceee8a3644a-etc-machine-id\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.678700 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-scripts\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.678787 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-combined-ca-bundle\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.678841 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-db-sync-config-data\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.678890 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdb2g\" (UniqueName: \"kubernetes.io/projected/9d6a585f-95fd-47f7-a809-cceee8a3644a-kube-api-access-tdb2g\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.681387 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jwn5v"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.683005 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d6a585f-95fd-47f7-a809-cceee8a3644a-etc-machine-id\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.684131 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-config-data\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.687124 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-combined-ca-bundle\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.692736 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-db-sync-config-data\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.704355 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9zp9n" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.714062 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-scripts\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.729015 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wwkwb"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.736554 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdb2g\" (UniqueName: \"kubernetes.io/projected/9d6a585f-95fd-47f7-a809-cceee8a3644a-kube-api-access-tdb2g\") pod \"cinder-db-sync-7dmfp\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.769037 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bzq42" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.775166 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cq29z"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.776459 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.783671 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-combined-ca-bundle\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.783740 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-scripts\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.783810 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-config-data\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.783864 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53de54-d42e-4095-95b0-df6db57c2106-logs\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.783894 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2l9r\" (UniqueName: \"kubernetes.io/projected/8d53de54-d42e-4095-95b0-df6db57c2106-kube-api-access-c2l9r\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.799822 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cq29z"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.816813 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.817089 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.817301 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-n94lc" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.867714 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-k7ffj"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.886657 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.887971 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-combined-ca-bundle\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.888042 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-combined-ca-bundle\") pod \"barbican-db-sync-cq29z\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.888081 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-scripts\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.888120 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-db-sync-config-data\") pod \"barbican-db-sync-cq29z\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.888179 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-config-data\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.898132 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53de54-d42e-4095-95b0-df6db57c2106-logs\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.898214 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2l9r\" (UniqueName: \"kubernetes.io/projected/8d53de54-d42e-4095-95b0-df6db57c2106-kube-api-access-c2l9r\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.898275 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj2hj\" (UniqueName: \"kubernetes.io/projected/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-kube-api-access-vj2hj\") pod \"barbican-db-sync-cq29z\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.900192 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53de54-d42e-4095-95b0-df6db57c2106-logs\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 
15:12:58.909875 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-k7ffj"] Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.994456 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-combined-ca-bundle\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.994758 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-scripts\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:58 crc kubenswrapper[4982]: I0224 15:12:58.995269 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-config-data\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.000550 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.005347 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2l9r\" (UniqueName: \"kubernetes.io/projected/8d53de54-d42e-4095-95b0-df6db57c2106-kube-api-access-c2l9r\") pod \"placement-db-sync-wwkwb\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.005647 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xscwx\" (UniqueName: \"kubernetes.io/projected/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-kube-api-access-xscwx\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.005738 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj2hj\" (UniqueName: \"kubernetes.io/projected/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-kube-api-access-vj2hj\") pod \"barbican-db-sync-cq29z\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.005809 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.005901 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-combined-ca-bundle\") pod \"barbican-db-sync-cq29z\" (UID: 
\"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.005992 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.006025 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-db-sync-config-data\") pod \"barbican-db-sync-cq29z\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.006068 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-config\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.006184 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-svc\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.016258 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-combined-ca-bundle\") pod \"barbican-db-sync-cq29z\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.048294 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj2hj\" (UniqueName: \"kubernetes.io/projected/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-kube-api-access-vj2hj\") pod \"barbican-db-sync-cq29z\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.050185 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-db-sync-config-data\") pod \"barbican-db-sync-cq29z\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.065654 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.073415 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.080600 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.082088 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.096555 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131132 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131234 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131282 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-config\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131379 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfwfv\" (UniqueName: \"kubernetes.io/projected/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-kube-api-access-qfwfv\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131419 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-svc\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131441 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-log-httpd\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131467 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-config-data\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131521 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " 
pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131556 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131687 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xscwx\" (UniqueName: \"kubernetes.io/projected/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-kube-api-access-xscwx\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131746 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-run-httpd\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131789 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-scripts\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.131812 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.132617 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.133206 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-config\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.133206 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.137204 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.137856 4982 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-svc\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.152456 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xscwx\" (UniqueName: \"kubernetes.io/projected/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-kube-api-access-xscwx\") pod \"dnsmasq-dns-cf78879c9-k7ffj\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") " pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.163975 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vmm4w" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.172480 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wwkwb" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.194879 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-n94lc" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.203710 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cq29z" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.243980 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-config-data\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.244032 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.244209 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-run-httpd\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.244249 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-scripts\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.244312 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.244430 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfwfv\" (UniqueName: \"kubernetes.io/projected/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-kube-api-access-qfwfv\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.244463 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-log-httpd\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.245168 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-log-httpd\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.245371 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-run-httpd\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.245963 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.254308 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-scripts\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.254541 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-config-data\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.254775 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.258115 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.277443 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfwfv\" (UniqueName: \"kubernetes.io/projected/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-kube-api-access-qfwfv\") pod \"ceilometer-0\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") " pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.408225 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jwn5v"] Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.425611 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:12:59 crc kubenswrapper[4982]: W0224 15:12:59.488639 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod475ae195_de59_489a_88e6_8577ee82879d.slice/crio-78b398047d3cf159fc40093fe5e4fd56211aa60ce8eb59de5848698156171b1e WatchSource:0}: Error finding container 78b398047d3cf159fc40093fe5e4fd56211aa60ce8eb59de5848698156171b1e: Status 404 returned error can't find the container with id 78b398047d3cf159fc40093fe5e4fd56211aa60ce8eb59de5848698156171b1e Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.692540 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bzq42"] Feb 24 15:12:59 crc kubenswrapper[4982]: W0224 15:12:59.713357 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee4cff74_7cc3_4470_8caf_32c6fbfb437b.slice/crio-c7c407eb511929d3e5ebc8f528582a3eca8d4e867b32b121a032e1667c19d1af WatchSource:0}: Error finding container c7c407eb511929d3e5ebc8f528582a3eca8d4e867b32b121a032e1667c19d1af: Status 404 returned error can't find the container with id c7c407eb511929d3e5ebc8f528582a3eca8d4e867b32b121a032e1667c19d1af Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.718338 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8kd5n"] Feb 24 15:12:59 crc kubenswrapper[4982]: I0224 15:12:59.729069 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.017008 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bzq42" event={"ID":"ee4cff74-7cc3-4470-8caf-32c6fbfb437b","Type":"ContainerStarted","Data":"360241317dd3ebde420bcbf84ed13b453b007121fa1dcb0fdef2634c061a1947"} Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.017282 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bzq42" event={"ID":"ee4cff74-7cc3-4470-8caf-32c6fbfb437b","Type":"ContainerStarted","Data":"c7c407eb511929d3e5ebc8f528582a3eca8d4e867b32b121a032e1667c19d1af"} Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.027542 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-jwn5v" event={"ID":"475ae195-de59-489a-88e6-8577ee82879d","Type":"ContainerStarted","Data":"651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293"} Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.027581 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-jwn5v" event={"ID":"475ae195-de59-489a-88e6-8577ee82879d","Type":"ContainerStarted","Data":"78b398047d3cf159fc40093fe5e4fd56211aa60ce8eb59de5848698156171b1e"} Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.031849 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8kd5n" event={"ID":"998c1608-0fd2-413f-8b11-a6cf5a62c394","Type":"ContainerStarted","Data":"f8a85a3dffdee28f1c54f7b2ee02e614c4ac7969ab7076b7516c8a1f18cb6410"} Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.031888 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8kd5n" event={"ID":"998c1608-0fd2-413f-8b11-a6cf5a62c394","Type":"ContainerStarted","Data":"cbb8d5d6b7d68d05e45d985e1d902579536600103db86a077a4a40104f3ff834"} Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.043485 4982 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bzq42" podStartSLOduration=2.043465116 podStartE2EDuration="2.043465116s" podCreationTimestamp="2026-02-24 15:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:00.032393815 +0000 UTC m=+1441.651452308" watchObservedRunningTime="2026-02-24 15:13:00.043465116 +0000 UTC m=+1441.662523609" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.116296 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8kd5n" podStartSLOduration=2.116277686 podStartE2EDuration="2.116277686s" podCreationTimestamp="2026-02-24 15:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:00.098488743 +0000 UTC m=+1441.717547236" watchObservedRunningTime="2026-02-24 15:13:00.116277686 +0000 UTC m=+1441.735336179" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.132560 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7dmfp"] Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.171019 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cq29z"] Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.252853 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wwkwb"] Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.267564 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-9zp9n"] Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.285251 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-k7ffj"] Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.333343 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.704196 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.704639 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.811380 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-svc\") pod \"475ae195-de59-489a-88e6-8577ee82879d\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.811435 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnw67\" (UniqueName: \"kubernetes.io/projected/475ae195-de59-489a-88e6-8577ee82879d-kube-api-access-pnw67\") pod \"475ae195-de59-489a-88e6-8577ee82879d\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.811711 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-nb\") pod \"475ae195-de59-489a-88e6-8577ee82879d\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.811842 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-config\") pod \"475ae195-de59-489a-88e6-8577ee82879d\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.811971 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-swift-storage-0\") pod \"475ae195-de59-489a-88e6-8577ee82879d\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.812073 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-sb\") pod \"475ae195-de59-489a-88e6-8577ee82879d\" (UID: \"475ae195-de59-489a-88e6-8577ee82879d\") " Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.841954 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "475ae195-de59-489a-88e6-8577ee82879d" (UID: "475ae195-de59-489a-88e6-8577ee82879d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.869032 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "475ae195-de59-489a-88e6-8577ee82879d" (UID: "475ae195-de59-489a-88e6-8577ee82879d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.869939 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "475ae195-de59-489a-88e6-8577ee82879d" (UID: "475ae195-de59-489a-88e6-8577ee82879d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.871612 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "475ae195-de59-489a-88e6-8577ee82879d" (UID: "475ae195-de59-489a-88e6-8577ee82879d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.873384 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-config" (OuterVolumeSpecName: "config") pod "475ae195-de59-489a-88e6-8577ee82879d" (UID: "475ae195-de59-489a-88e6-8577ee82879d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.883313 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475ae195-de59-489a-88e6-8577ee82879d-kube-api-access-pnw67" (OuterVolumeSpecName: "kube-api-access-pnw67") pod "475ae195-de59-489a-88e6-8577ee82879d" (UID: "475ae195-de59-489a-88e6-8577ee82879d"). InnerVolumeSpecName "kube-api-access-pnw67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.916176 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.916474 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.916622 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnw67\" (UniqueName: \"kubernetes.io/projected/475ae195-de59-489a-88e6-8577ee82879d-kube-api-access-pnw67\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.916713 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.916794 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:00 crc kubenswrapper[4982]: I0224 15:13:00.916871 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/475ae195-de59-489a-88e6-8577ee82879d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.046420 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9zp9n" 
event={"ID":"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99","Type":"ContainerStarted","Data":"814b1d17da64301b2dde6db43399f8d2ac9a66c9dcfd37022f3a6043137e6a6b"} Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.051691 4982 generic.go:334] "Generic (PLEG): container finished" podID="475ae195-de59-489a-88e6-8577ee82879d" containerID="651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293" exitCode=0 Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.052143 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-jwn5v" Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.052553 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-jwn5v" event={"ID":"475ae195-de59-489a-88e6-8577ee82879d","Type":"ContainerDied","Data":"651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293"} Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.052594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-jwn5v" event={"ID":"475ae195-de59-489a-88e6-8577ee82879d","Type":"ContainerDied","Data":"78b398047d3cf159fc40093fe5e4fd56211aa60ce8eb59de5848698156171b1e"} Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.052611 4982 scope.go:117] "RemoveContainer" containerID="651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293" Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.057657 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cq29z" event={"ID":"2b24785f-aeec-4512-8f01-b0e1fc31a2b4","Type":"ContainerStarted","Data":"4fc0255cffd230ceae5df71aeabf91191e5aa67f24ed95c8ac523edba297630e"} Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.059622 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1","Type":"ContainerStarted","Data":"7d03941a6f2aa5e37acabe32ee50b9fef6b8f33d2394be65fd85f4958a68f015"} Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.062107 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wwkwb" event={"ID":"8d53de54-d42e-4095-95b0-df6db57c2106","Type":"ContainerStarted","Data":"640ec5f9ca5f170c0caac3caee9ea8bf972fcc074f0faa7632c30a0bcf69925f"} Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.063340 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7dmfp" event={"ID":"9d6a585f-95fd-47f7-a809-cceee8a3644a","Type":"ContainerStarted","Data":"f386edaa6748cb5ebcfeae8b2f5ba7c285a8d49e063c10fbeb2c50301af3c6ba"} Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.066404 4982 generic.go:334] "Generic (PLEG): container finished" podID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerID="5266529288ba1b8db3c86e52b2f94ff4322024b55d3e3c944dace38e874dcf7a" exitCode=0 Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.066605 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" event={"ID":"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582","Type":"ContainerDied","Data":"5266529288ba1b8db3c86e52b2f94ff4322024b55d3e3c944dace38e874dcf7a"} Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.066665 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" event={"ID":"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582","Type":"ContainerStarted","Data":"2dc01484d8268caae737c06f748fd0607d3c4ff2a888f155714f170d003512d8"} Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 
15:13:01.103311 4982 scope.go:117] "RemoveContainer" containerID="651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293"
Feb 24 15:13:01 crc kubenswrapper[4982]: E0224 15:13:01.106624 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293\": container with ID starting with 651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293 not found: ID does not exist" containerID="651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293"
Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.106665 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293"} err="failed to get container status \"651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293\": rpc error: code = NotFound desc = could not find container \"651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293\": container with ID starting with 651b4d9332bb5e62a42234bc5a2dd923489ff632a22e4c8cd42f2439f5c9d293 not found: ID does not exist"
Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.184393 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jwn5v"]
Feb 24 15:13:01 crc kubenswrapper[4982]: I0224 15:13:01.184512 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-jwn5v"]
Feb 24 15:13:02 crc kubenswrapper[4982]: I0224 15:13:02.091553 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" event={"ID":"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582","Type":"ContainerStarted","Data":"4929189ce037909806b050c77005b595c67eabda9288f1c7eecdffff5de02703"}
Feb 24 15:13:02 crc kubenswrapper[4982]: I0224 15:13:02.093881 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj"
Feb 24 15:13:02 crc kubenswrapper[4982]: I0224 15:13:02.124641 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" podStartSLOduration=4.124621464 podStartE2EDuration="4.124621464s" podCreationTimestamp="2026-02-24 15:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:02.113942153 +0000 UTC m=+1443.733000656" watchObservedRunningTime="2026-02-24 15:13:02.124621464 +0000 UTC m=+1443.743679957"
Feb 24 15:13:03 crc kubenswrapper[4982]: I0224 15:13:03.173592 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475ae195-de59-489a-88e6-8577ee82879d" path="/var/lib/kubelet/pods/475ae195-de59-489a-88e6-8577ee82879d/volumes"
Feb 24 15:13:05 crc kubenswrapper[4982]: I0224 15:13:05.158308 4982 generic.go:334] "Generic (PLEG): container finished" podID="998c1608-0fd2-413f-8b11-a6cf5a62c394" containerID="f8a85a3dffdee28f1c54f7b2ee02e614c4ac7969ab7076b7516c8a1f18cb6410" exitCode=0
Feb 24 15:13:05 crc kubenswrapper[4982]: I0224 15:13:05.167157 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8kd5n" event={"ID":"998c1608-0fd2-413f-8b11-a6cf5a62c394","Type":"ContainerDied","Data":"f8a85a3dffdee28f1c54f7b2ee02e614c4ac7969ab7076b7516c8a1f18cb6410"}
Feb 24 15:13:06 crc kubenswrapper[4982]: I0224 15:13:06.176433 4982 generic.go:334] "Generic (PLEG): container finished" podID="1ed4027c-742c-4789-9fad-fc912c419d6d" containerID="669c10dd2fdf5afa8e3a5942437580742fe649f035edb7476f428311dc5b1681" exitCode=0
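The scope.go/log.go/pod_container_deletor.go entries above record a benign race: by the time the kubelet retries RemoveContainer for 651b4d93..., CRI-O has already deleted the container and answers NotFound, so the kubelet logs the error and moves on, because the desired state (container gone) already holds. A minimal Go sketch of that idempotent-delete pattern, with a hypothetical runtimeService standing in for the CRI client:

    package crihelpers

    import (
        "context"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // runtimeService is a minimal stand-in for the CRI runtime client;
    // the real interface lives in the kubelet's container-runtime packages.
    type runtimeService interface {
        RemoveContainer(ctx context.Context, containerID string) error
    }

    // removeContainerIdempotent mirrors the "DeleteContainer returned
    // error ... NotFound" handling above: if the runtime no longer knows
    // the ID, removal has already succeeded, so NotFound is not a failure.
    func removeContainerIdempotent(ctx context.Context, rt runtimeService, id string) error {
        if err := rt.RemoveContainer(ctx, id); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

Treating deletion as idempotent is what lets the sync loop retry freely without escalating a NotFound into a pod-level error.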
containerID="669c10dd2fdf5afa8e3a5942437580742fe649f035edb7476f428311dc5b1681" exitCode=0 Feb 24 15:13:06 crc kubenswrapper[4982]: I0224 15:13:06.176651 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-82562" event={"ID":"1ed4027c-742c-4789-9fad-fc912c419d6d","Type":"ContainerDied","Data":"669c10dd2fdf5afa8e3a5942437580742fe649f035edb7476f428311dc5b1681"} Feb 24 15:13:08 crc kubenswrapper[4982]: I0224 15:13:08.220303 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 24 15:13:08 crc kubenswrapper[4982]: I0224 15:13:08.227041 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 24 15:13:09 crc kubenswrapper[4982]: I0224 15:13:09.225033 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 24 15:13:09 crc kubenswrapper[4982]: I0224 15:13:09.247960 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" Feb 24 15:13:09 crc kubenswrapper[4982]: I0224 15:13:09.379531 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hdhlf"] Feb 24 15:13:09 crc kubenswrapper[4982]: I0224 15:13:09.379791 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="dnsmasq-dns" containerID="cri-o://6172f6d40939f3dfda016aad513a08117a4d14c6bf36822941ce521ad74135a2" gracePeriod=10 Feb 24 15:13:10 crc kubenswrapper[4982]: I0224 15:13:10.232550 4982 generic.go:334] "Generic (PLEG): container finished" podID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerID="6172f6d40939f3dfda016aad513a08117a4d14c6bf36822941ce521ad74135a2" exitCode=0 Feb 24 15:13:10 crc kubenswrapper[4982]: I0224 15:13:10.232596 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" event={"ID":"05fc4eea-f1dd-45dd-952c-dd01bff24a3b","Type":"ContainerDied","Data":"6172f6d40939f3dfda016aad513a08117a4d14c6bf36822941ce521ad74135a2"} Feb 24 15:13:12 crc kubenswrapper[4982]: I0224 15:13:12.267065 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.171:5353: connect: connection refused" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.429373 4982 util.go:48] "No ready sandbox for pod can be found. 
Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.464133 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48984\" (UniqueName: \"kubernetes.io/projected/998c1608-0fd2-413f-8b11-a6cf5a62c394-kube-api-access-48984\") pod \"998c1608-0fd2-413f-8b11-a6cf5a62c394\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") "
Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.464195 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-config-data\") pod \"998c1608-0fd2-413f-8b11-a6cf5a62c394\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") "
Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.464379 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-credential-keys\") pod \"998c1608-0fd2-413f-8b11-a6cf5a62c394\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") "
Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.464455 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-combined-ca-bundle\") pod \"998c1608-0fd2-413f-8b11-a6cf5a62c394\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") "
Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.464482 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-scripts\") pod \"998c1608-0fd2-413f-8b11-a6cf5a62c394\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") "
Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.464532 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-fernet-keys\") pod \"998c1608-0fd2-413f-8b11-a6cf5a62c394\" (UID: \"998c1608-0fd2-413f-8b11-a6cf5a62c394\") "
Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.472195 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998c1608-0fd2-413f-8b11-a6cf5a62c394-kube-api-access-48984" (OuterVolumeSpecName: "kube-api-access-48984") pod "998c1608-0fd2-413f-8b11-a6cf5a62c394" (UID: "998c1608-0fd2-413f-8b11-a6cf5a62c394"). InnerVolumeSpecName "kube-api-access-48984". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.473315 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "998c1608-0fd2-413f-8b11-a6cf5a62c394" (UID: "998c1608-0fd2-413f-8b11-a6cf5a62c394"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.474636 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "998c1608-0fd2-413f-8b11-a6cf5a62c394" (UID: "998c1608-0fd2-413f-8b11-a6cf5a62c394"). InnerVolumeSpecName "fernet-keys".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.488769 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-scripts" (OuterVolumeSpecName: "scripts") pod "998c1608-0fd2-413f-8b11-a6cf5a62c394" (UID: "998c1608-0fd2-413f-8b11-a6cf5a62c394"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.507777 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-config-data" (OuterVolumeSpecName: "config-data") pod "998c1608-0fd2-413f-8b11-a6cf5a62c394" (UID: "998c1608-0fd2-413f-8b11-a6cf5a62c394"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.509396 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "998c1608-0fd2-413f-8b11-a6cf5a62c394" (UID: "998c1608-0fd2-413f-8b11-a6cf5a62c394"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.567240 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48984\" (UniqueName: \"kubernetes.io/projected/998c1608-0fd2-413f-8b11-a6cf5a62c394-kube-api-access-48984\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.567290 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.567306 4982 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.567318 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.567328 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:16 crc kubenswrapper[4982]: I0224 15:13:16.567339 4982 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/998c1608-0fd2-413f-8b11-a6cf5a62c394-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:16 crc kubenswrapper[4982]: E0224 15:13:16.721765 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 24 15:13:16 crc kubenswrapper[4982]: E0224 15:13:16.722478 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nrzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-9zp9n_openstack(2295dbe7-ace5-48d8-8952-fd4c3b5ddf99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:13:16 crc kubenswrapper[4982]: E0224 15:13:16.723678 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-9zp9n" podUID="2295dbe7-ace5-48d8-8952-fd4c3b5ddf99" Feb 24 15:13:17 crc kubenswrapper[4982]: E0224 15:13:17.056662 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 24 15:13:17 crc kubenswrapper[4982]: E0224 15:13:17.057148 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5h54ch57ch54h5f4h5f6h84h57fh78h5cbh656h647h55dhcdh5bdh689h664h576h5c4h5c7hf4h67dh98h649h5b9h66fh5b5h7fh5ffh65h644h5bfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfwfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.131968 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-82562"
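The heat-db-sync and ceilometer-central-agent start failures above are the same event seen at two layers: the CRI PullImage RPC is canceled mid-copy ("copying config: context canceled"), kuberuntime_manager surfaces that as ErrImagePull, and pod_workers skips the sync. On later syncs the reported reason shifts to ImagePullBackOff (visible below for heat-db-sync and barbican-db-sync), meaning the kubelet waits out a growing backoff window before pulling again. A sketch of that progression, with illustrative durations that are assumptions, not values read from this log:

    package pullstate

    import "time"

    // nextPullBackoff sketches the ErrImagePull -> ImagePullBackOff
    // progression: the first failed pull is reported as ErrImagePull,
    // and while the backoff window is open subsequent pod syncs report
    // ImagePullBackOff instead of re-pulling. The 10s initial / 5m cap
    // values are illustrative assumptions about the doubling schedule.
    func nextPullBackoff(prev time.Duration) time.Duration {
        const (
            initial = 10 * time.Second
            maxWait = 5 * time.Minute
        )
        if prev <= 0 {
            return initial
        }
        if next := 2 * prev; next < maxWait {
            return next
        }
        return maxWait
    }

The cap keeps a permanently broken image reference from hammering the registry while still retrying often enough to recover quickly once pulls succeed again.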
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.191210 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-combined-ca-bundle\") pod \"1ed4027c-742c-4789-9fad-fc912c419d6d\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") "
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.191368 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-db-sync-config-data\") pod \"1ed4027c-742c-4789-9fad-fc912c419d6d\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") "
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.191467 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-config-data\") pod \"1ed4027c-742c-4789-9fad-fc912c419d6d\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") "
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.191805 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbpzc\" (UniqueName: \"kubernetes.io/projected/1ed4027c-742c-4789-9fad-fc912c419d6d-kube-api-access-wbpzc\") pod \"1ed4027c-742c-4789-9fad-fc912c419d6d\" (UID: \"1ed4027c-742c-4789-9fad-fc912c419d6d\") "
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.196905 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1ed4027c-742c-4789-9fad-fc912c419d6d" (UID: "1ed4027c-742c-4789-9fad-fc912c419d6d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.198056 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed4027c-742c-4789-9fad-fc912c419d6d-kube-api-access-wbpzc" (OuterVolumeSpecName: "kube-api-access-wbpzc") pod "1ed4027c-742c-4789-9fad-fc912c419d6d" (UID: "1ed4027c-742c-4789-9fad-fc912c419d6d"). InnerVolumeSpecName "kube-api-access-wbpzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.243021 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ed4027c-742c-4789-9fad-fc912c419d6d" (UID: "1ed4027c-742c-4789-9fad-fc912c419d6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.268705 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-config-data" (OuterVolumeSpecName: "config-data") pod "1ed4027c-742c-4789-9fad-fc912c419d6d" (UID: "1ed4027c-742c-4789-9fad-fc912c419d6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.294815 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.294852 4982 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.294862 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ed4027c-742c-4789-9fad-fc912c419d6d-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.294871 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbpzc\" (UniqueName: \"kubernetes.io/projected/1ed4027c-742c-4789-9fad-fc912c419d6d-kube-api-access-wbpzc\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.310115 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8kd5n" event={"ID":"998c1608-0fd2-413f-8b11-a6cf5a62c394","Type":"ContainerDied","Data":"cbb8d5d6b7d68d05e45d985e1d902579536600103db86a077a4a40104f3ff834"} Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.310168 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb8d5d6b7d68d05e45d985e1d902579536600103db86a077a4a40104f3ff834" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.310254 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8kd5n" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.313485 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-82562" event={"ID":"1ed4027c-742c-4789-9fad-fc912c419d6d","Type":"ContainerDied","Data":"e533b035dadcb6b61b7ecc2e7c919b23e1592e099bf1858b5d105080c02678bb"} Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.313558 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e533b035dadcb6b61b7ecc2e7c919b23e1592e099bf1858b5d105080c02678bb" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.313605 4982 util.go:48] "No ready sandbox for pod can be found. 
Feb 24 15:13:17 crc kubenswrapper[4982]: E0224 15:13:17.316996 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-9zp9n" podUID="2295dbe7-ace5-48d8-8952-fd4c3b5ddf99"
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.608069 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8kd5n"]
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.619314 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8kd5n"]
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.729892 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s55kw"]
Feb 24 15:13:17 crc kubenswrapper[4982]: E0224 15:13:17.730391 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998c1608-0fd2-413f-8b11-a6cf5a62c394" containerName="keystone-bootstrap"
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.730414 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="998c1608-0fd2-413f-8b11-a6cf5a62c394" containerName="keystone-bootstrap"
Feb 24 15:13:17 crc kubenswrapper[4982]: E0224 15:13:17.730444 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475ae195-de59-489a-88e6-8577ee82879d" containerName="init"
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.730453 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="475ae195-de59-489a-88e6-8577ee82879d" containerName="init"
Feb 24 15:13:17 crc kubenswrapper[4982]: E0224 15:13:17.730471 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed4027c-742c-4789-9fad-fc912c419d6d" containerName="glance-db-sync"
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.730482 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed4027c-742c-4789-9fad-fc912c419d6d" containerName="glance-db-sync"
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.730762 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="475ae195-de59-489a-88e6-8577ee82879d" containerName="init"
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.730785 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed4027c-742c-4789-9fad-fc912c419d6d" containerName="glance-db-sync"
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.730814 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="998c1608-0fd2-413f-8b11-a6cf5a62c394" containerName="keystone-bootstrap"
Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.731646 4982 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.737257 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.737573 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.737792 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8hsbr" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.738678 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.739395 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.747416 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s55kw"] Feb 24 15:13:17 crc kubenswrapper[4982]: E0224 15:13:17.801742 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 24 15:13:17 crc kubenswrapper[4982]: E0224 15:13:17.801948 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj2hj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cq29z_openstack(2b24785f-aeec-4512-8f01-b0e1fc31a2b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:13:17 crc kubenswrapper[4982]: E0224 15:13:17.803155 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/barbican-db-sync-cq29z" podUID="2b24785f-aeec-4512-8f01-b0e1fc31a2b4" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.807146 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-scripts\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.807210 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-credential-keys\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.807319 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-combined-ca-bundle\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.807397 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-fernet-keys\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.807645 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p926\" (UniqueName: \"kubernetes.io/projected/04b34e34-d603-4df9-a028-0169bf57fae7-kube-api-access-8p926\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.807773 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-config-data\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.909598 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-config-data\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.909730 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-scripts\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.909759 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-credential-keys\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " 
pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.909789 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-combined-ca-bundle\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.909994 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-fernet-keys\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.910074 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p926\" (UniqueName: \"kubernetes.io/projected/04b34e34-d603-4df9-a028-0169bf57fae7-kube-api-access-8p926\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.914058 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-scripts\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.914315 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-combined-ca-bundle\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.914856 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-fernet-keys\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.916211 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-config-data\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.916919 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-credential-keys\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:17 crc kubenswrapper[4982]: I0224 15:13:17.929696 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p926\" (UniqueName: \"kubernetes.io/projected/04b34e34-d603-4df9-a028-0169bf57fae7-kube-api-access-8p926\") pod \"keystone-bootstrap-s55kw\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.056972 4982 util.go:30] "No sandbox for pod can be found. 
Feb 24 15:13:18 crc kubenswrapper[4982]: E0224 15:13:18.327594 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-cq29z" podUID="2b24785f-aeec-4512-8f01-b0e1fc31a2b4"
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.673384 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-t9v58"]
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.682831 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58"
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.720142 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-t9v58"]
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.735998 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-config\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58"
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.736095 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58"
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.736370 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58"
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.736461 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9gh\" (UniqueName: \"kubernetes.io/projected/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-kube-api-access-bl9gh\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58"
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.736492 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58"
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.736630 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58"
Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.846925 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-config\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.847086 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.847277 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.847339 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl9gh\" (UniqueName: \"kubernetes.io/projected/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-kube-api-access-bl9gh\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.847372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.847435 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.848785 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.848812 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-config\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.849006 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.849788 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.850515 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:18 crc kubenswrapper[4982]: I0224 15:13:18.873818 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl9gh\" (UniqueName: \"kubernetes.io/projected/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-kube-api-access-bl9gh\") pod \"dnsmasq-dns-56df8fb6b7-t9v58\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.051004 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.170799 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998c1608-0fd2-413f-8b11-a6cf5a62c394" path="/var/lib/kubelet/pods/998c1608-0fd2-413f-8b11-a6cf5a62c394/volumes" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.585363 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.587829 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.590168 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7m87m" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.590547 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.596306 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.601755 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.666028 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv599\" (UniqueName: \"kubernetes.io/projected/dab6259a-5558-4bdc-9b99-1c2ba8778593-kube-api-access-cv599\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.666110 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-scripts\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.666184 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.666222 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.666359 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.666383 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-logs\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.666456 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-config-data\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.769453 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.769594 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-logs\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.769650 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-config-data\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.769697 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv599\" (UniqueName: \"kubernetes.io/projected/dab6259a-5558-4bdc-9b99-1c2ba8778593-kube-api-access-cv599\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.769721 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-scripts\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.769795 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.769847 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.770372 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.771075 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-logs\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.774479 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.787757 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-config-data\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.789458 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-scripts\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.791979 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv599\" (UniqueName: \"kubernetes.io/projected/dab6259a-5558-4bdc-9b99-1c2ba8778593-kube-api-access-cv599\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.794590 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.795197 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e56090e32bcbf36b2e5fe0665116dc217294b29490f5f5ced3111bea03953390/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.869895 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.893571 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.896152 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.903187 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.905875 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.925373 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.977314 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.977444 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-scripts\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.977493 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.977565 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.978050 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-logs\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.978209 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-config-data\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:19 crc kubenswrapper[4982]: I0224 15:13:19.978377 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh6sm\" (UniqueName: \"kubernetes.io/projected/10299efa-200c-4ee5-b91f-c0708ef27fab-kube-api-access-kh6sm\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.080565 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.080643 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-scripts\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.080679 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.080712 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.080816 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-logs\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.080859 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-config-data\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.080928 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh6sm\" (UniqueName: \"kubernetes.io/projected/10299efa-200c-4ee5-b91f-c0708ef27fab-kube-api-access-kh6sm\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.081438 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.081485 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-logs\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.082697 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.082728 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70a27d34e7ae7884b230e8e34a62af8eccdb60653e302a58734e40794a2eda43/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.084623 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.085474 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-config-data\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.089938 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-scripts\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.115684 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh6sm\" (UniqueName: \"kubernetes.io/projected/10299efa-200c-4ee5-b91f-c0708ef27fab-kube-api-access-kh6sm\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.124875 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:20 crc kubenswrapper[4982]: I0224 15:13:20.245231 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:21 crc kubenswrapper[4982]: I0224 15:13:21.824457 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:13:21 crc kubenswrapper[4982]: I0224 15:13:21.927402 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:13:22 crc kubenswrapper[4982]: I0224 15:13:22.266222 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.171:5353: i/o timeout" Feb 24 15:13:27 crc kubenswrapper[4982]: I0224 15:13:27.267482 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.171:5353: i/o timeout" Feb 24 15:13:27 crc kubenswrapper[4982]: I0224 15:13:27.268493 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:13:29 crc kubenswrapper[4982]: I0224 15:13:29.158239 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.140409 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.243304 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-config\") pod \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.243418 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jvhz\" (UniqueName: \"kubernetes.io/projected/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-kube-api-access-5jvhz\") pod \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.243520 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-svc\") pod \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.243622 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-sb\") pod \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.243653 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-nb\") pod \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.243831 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-swift-storage-0\") pod \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\" (UID: \"05fc4eea-f1dd-45dd-952c-dd01bff24a3b\") " Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.263382 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-kube-api-access-5jvhz" (OuterVolumeSpecName: "kube-api-access-5jvhz") pod "05fc4eea-f1dd-45dd-952c-dd01bff24a3b" (UID: "05fc4eea-f1dd-45dd-952c-dd01bff24a3b"). InnerVolumeSpecName "kube-api-access-5jvhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.316183 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05fc4eea-f1dd-45dd-952c-dd01bff24a3b" (UID: "05fc4eea-f1dd-45dd-952c-dd01bff24a3b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.325597 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "05fc4eea-f1dd-45dd-952c-dd01bff24a3b" (UID: "05fc4eea-f1dd-45dd-952c-dd01bff24a3b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.328289 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-config" (OuterVolumeSpecName: "config") pod "05fc4eea-f1dd-45dd-952c-dd01bff24a3b" (UID: "05fc4eea-f1dd-45dd-952c-dd01bff24a3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.328454 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05fc4eea-f1dd-45dd-952c-dd01bff24a3b" (UID: "05fc4eea-f1dd-45dd-952c-dd01bff24a3b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.328865 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05fc4eea-f1dd-45dd-952c-dd01bff24a3b" (UID: "05fc4eea-f1dd-45dd-952c-dd01bff24a3b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.348704 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.348747 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.348762 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.348774 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.348791 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jvhz\" (UniqueName: \"kubernetes.io/projected/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-kube-api-access-5jvhz\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.348804 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fc4eea-f1dd-45dd-952c-dd01bff24a3b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.468956 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" event={"ID":"05fc4eea-f1dd-45dd-952c-dd01bff24a3b","Type":"ContainerDied","Data":"89e58ff813e15709c17b2902fbab99d5467ee19c8dff7918ff997ccc25b4cba2"} Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.469316 4982 scope.go:117] "RemoveContainer" containerID="6172f6d40939f3dfda016aad513a08117a4d14c6bf36822941ce521ad74135a2" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.469237 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.511801 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hdhlf"] Feb 24 15:13:30 crc kubenswrapper[4982]: I0224 15:13:30.522481 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hdhlf"] Feb 24 15:13:31 crc kubenswrapper[4982]: I0224 15:13:31.158360 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" path="/var/lib/kubelet/pods/05fc4eea-f1dd-45dd-952c-dd01bff24a3b/volumes" Feb 24 15:13:31 crc kubenswrapper[4982]: E0224 15:13:31.339831 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 24 15:13:31 crc kubenswrapper[4982]: E0224 15:13:31.340356 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdb2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7dmfp_openstack(9d6a585f-95fd-47f7-a809-cceee8a3644a): ErrImagePull: rpc error: code = Canceled desc 
= copying config: context canceled" logger="UnhandledError" Feb 24 15:13:31 crc kubenswrapper[4982]: E0224 15:13:31.341556 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7dmfp" podUID="9d6a585f-95fd-47f7-a809-cceee8a3644a" Feb 24 15:13:31 crc kubenswrapper[4982]: I0224 15:13:31.379586 4982 scope.go:117] "RemoveContainer" containerID="968d4188192e031f40dd2366a95130a7924ee572be0dd4cb10d663b47855b9f5" Feb 24 15:13:31 crc kubenswrapper[4982]: E0224 15:13:31.494377 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7dmfp" podUID="9d6a585f-95fd-47f7-a809-cceee8a3644a" Feb 24 15:13:31 crc kubenswrapper[4982]: I0224 15:13:31.955765 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-t9v58"] Feb 24 15:13:32 crc kubenswrapper[4982]: W0224 15:13:32.002330 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61f91d78_8f5e_46ec_9dc6_df98ae71f8ae.slice/crio-55e3efde61002e7aebe858d5aec3beadbb55b769e508764ff355b8f8591b3a30 WatchSource:0}: Error finding container 55e3efde61002e7aebe858d5aec3beadbb55b769e508764ff355b8f8591b3a30: Status 404 returned error can't find the container with id 55e3efde61002e7aebe858d5aec3beadbb55b769e508764ff355b8f8591b3a30 Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.273768 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-hdhlf" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.171:5353: i/o timeout" Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.420393 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:13:32 crc kubenswrapper[4982]: W0224 15:13:32.473098 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10299efa_200c_4ee5_b91f_c0708ef27fab.slice/crio-c75810453efa93d13e514a9e67bf610989869da473c977c5b9ac173b35c5e3f6 WatchSource:0}: Error finding container c75810453efa93d13e514a9e67bf610989869da473c977c5b9ac173b35c5e3f6: Status 404 returned error can't find the container with id c75810453efa93d13e514a9e67bf610989869da473c977c5b9ac173b35c5e3f6 Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.479177 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s55kw"] Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.516746 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1","Type":"ContainerStarted","Data":"989a0da191dd8d7d31472c1820c66f5063e71ee1d394f41d90252bd2dd9703ac"} Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.520541 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wwkwb" event={"ID":"8d53de54-d42e-4095-95b0-df6db57c2106","Type":"ContainerStarted","Data":"fe188739f98702b7c44f5de1b62011c6c9b35ab29531dbf95df0ac457d45e018"} Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.523290 4982 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s55kw" event={"ID":"04b34e34-d603-4df9-a028-0169bf57fae7","Type":"ContainerStarted","Data":"cf0bbb7b33e8b30b5f94781f577eb1208bd03f6ea2236fc69fc95429ed43843a"} Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.525125 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9zp9n" event={"ID":"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99","Type":"ContainerStarted","Data":"a49cceabac4005ea9fcba0f3ac98dee484269c7940d08c44e9fbc6c1e2495985"} Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.526864 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"10299efa-200c-4ee5-b91f-c0708ef27fab","Type":"ContainerStarted","Data":"c75810453efa93d13e514a9e67bf610989869da473c977c5b9ac173b35c5e3f6"} Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.541980 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cq29z" event={"ID":"2b24785f-aeec-4512-8f01-b0e1fc31a2b4","Type":"ContainerStarted","Data":"a28fa4c1c14ee8ba67d786e0aa16a7fc209bc0822cbefff9c60a9acc2b2ab21d"} Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.564685 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wwkwb" podStartSLOduration=3.499544308 podStartE2EDuration="34.564667289s" podCreationTimestamp="2026-02-24 15:12:58 +0000 UTC" firstStartedPulling="2026-02-24 15:13:00.221419625 +0000 UTC m=+1441.840478108" lastFinishedPulling="2026-02-24 15:13:31.286542596 +0000 UTC m=+1472.905601089" observedRunningTime="2026-02-24 15:13:32.547295207 +0000 UTC m=+1474.166353700" watchObservedRunningTime="2026-02-24 15:13:32.564667289 +0000 UTC m=+1474.183725772" Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.574063 4982 generic.go:334] "Generic (PLEG): container finished" podID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" containerID="8c0d91fb2cd8cfdb7af6bf9b0605eb2bde763475194cc60e01970098f7579267" exitCode=0 Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.574101 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" event={"ID":"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae","Type":"ContainerDied","Data":"8c0d91fb2cd8cfdb7af6bf9b0605eb2bde763475194cc60e01970098f7579267"} Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.574194 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" event={"ID":"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae","Type":"ContainerStarted","Data":"55e3efde61002e7aebe858d5aec3beadbb55b769e508764ff355b8f8591b3a30"} Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.597296 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-9zp9n" podStartSLOduration=2.856719871 podStartE2EDuration="34.597270846s" podCreationTimestamp="2026-02-24 15:12:58 +0000 UTC" firstStartedPulling="2026-02-24 15:13:00.257069875 +0000 UTC m=+1441.876128368" lastFinishedPulling="2026-02-24 15:13:31.99762085 +0000 UTC m=+1473.616679343" observedRunningTime="2026-02-24 15:13:32.570922529 +0000 UTC m=+1474.189981022" watchObservedRunningTime="2026-02-24 15:13:32.597270846 +0000 UTC m=+1474.216329339" Feb 24 15:13:32 crc kubenswrapper[4982]: I0224 15:13:32.656673 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cq29z" podStartSLOduration=2.930735262 podStartE2EDuration="34.65665095s" podCreationTimestamp="2026-02-24 
15:12:58 +0000 UTC" firstStartedPulling="2026-02-24 15:13:00.20686919 +0000 UTC m=+1441.825927683" lastFinishedPulling="2026-02-24 15:13:31.932784878 +0000 UTC m=+1473.551843371" observedRunningTime="2026-02-24 15:13:32.606302011 +0000 UTC m=+1474.225360524" watchObservedRunningTime="2026-02-24 15:13:32.65665095 +0000 UTC m=+1474.275709453" Feb 24 15:13:33 crc kubenswrapper[4982]: I0224 15:13:33.056814 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:13:33 crc kubenswrapper[4982]: W0224 15:13:33.079811 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddab6259a_5558_4bdc_9b99_1c2ba8778593.slice/crio-1e7beeb03cb560b74330a03b70b1b6afadd0ae1149305261dd09d912ea7dba15 WatchSource:0}: Error finding container 1e7beeb03cb560b74330a03b70b1b6afadd0ae1149305261dd09d912ea7dba15: Status 404 returned error can't find the container with id 1e7beeb03cb560b74330a03b70b1b6afadd0ae1149305261dd09d912ea7dba15 Feb 24 15:13:33 crc kubenswrapper[4982]: I0224 15:13:33.613387 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"10299efa-200c-4ee5-b91f-c0708ef27fab","Type":"ContainerStarted","Data":"f7ff05a49081d4d89ec4e9b663ad33e76e2b736c1850e05f5036dad5286f6264"} Feb 24 15:13:33 crc kubenswrapper[4982]: I0224 15:13:33.617316 4982 generic.go:334] "Generic (PLEG): container finished" podID="ee4cff74-7cc3-4470-8caf-32c6fbfb437b" containerID="360241317dd3ebde420bcbf84ed13b453b007121fa1dcb0fdef2634c061a1947" exitCode=0 Feb 24 15:13:33 crc kubenswrapper[4982]: I0224 15:13:33.617391 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bzq42" event={"ID":"ee4cff74-7cc3-4470-8caf-32c6fbfb437b","Type":"ContainerDied","Data":"360241317dd3ebde420bcbf84ed13b453b007121fa1dcb0fdef2634c061a1947"} Feb 24 15:13:33 crc kubenswrapper[4982]: I0224 15:13:33.619979 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab6259a-5558-4bdc-9b99-1c2ba8778593","Type":"ContainerStarted","Data":"1e7beeb03cb560b74330a03b70b1b6afadd0ae1149305261dd09d912ea7dba15"} Feb 24 15:13:33 crc kubenswrapper[4982]: I0224 15:13:33.628867 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s55kw" event={"ID":"04b34e34-d603-4df9-a028-0169bf57fae7","Type":"ContainerStarted","Data":"16931269a19bccd00d82c8a5775b9ad1f391fd9ffd7950cbc51625588817cf81"} Feb 24 15:13:33 crc kubenswrapper[4982]: I0224 15:13:33.700208 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s55kw" podStartSLOduration=16.700185434 podStartE2EDuration="16.700185434s" podCreationTimestamp="2026-02-24 15:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:33.687778366 +0000 UTC m=+1475.306836859" watchObservedRunningTime="2026-02-24 15:13:33.700185434 +0000 UTC m=+1475.319243927" Feb 24 15:13:34 crc kubenswrapper[4982]: I0224 15:13:34.642058 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" event={"ID":"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae","Type":"ContainerStarted","Data":"280f2d154ff9dc4c2e21a24df558fcfa39d02ad9fedb5bf9d465f5514bdfc7cf"} Feb 24 15:13:34 crc kubenswrapper[4982]: I0224 15:13:34.644298 4982 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:34 crc kubenswrapper[4982]: I0224 15:13:34.645418 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab6259a-5558-4bdc-9b99-1c2ba8778593","Type":"ContainerStarted","Data":"1ebc174c4aa18925ef9c2a537cf33f2a9e4b83a088d8f92794fd4fe7ea09f349"} Feb 24 15:13:34 crc kubenswrapper[4982]: I0224 15:13:34.645471 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab6259a-5558-4bdc-9b99-1c2ba8778593","Type":"ContainerStarted","Data":"d9485b817c09e03b97c9b208ac5ca680f9578bd551a39647700aa5e5e4baa3d1"} Feb 24 15:13:34 crc kubenswrapper[4982]: I0224 15:13:34.649648 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerName="glance-log" containerID="cri-o://f7ff05a49081d4d89ec4e9b663ad33e76e2b736c1850e05f5036dad5286f6264" gracePeriod=30 Feb 24 15:13:34 crc kubenswrapper[4982]: I0224 15:13:34.649869 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"10299efa-200c-4ee5-b91f-c0708ef27fab","Type":"ContainerStarted","Data":"3a081aa8aeca37d0dc99790dbbd23227ca09341bd5b13ae8a9fda501c5852f21"} Feb 24 15:13:34 crc kubenswrapper[4982]: I0224 15:13:34.650268 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerName="glance-httpd" containerID="cri-o://3a081aa8aeca37d0dc99790dbbd23227ca09341bd5b13ae8a9fda501c5852f21" gracePeriod=30 Feb 24 15:13:34 crc kubenswrapper[4982]: I0224 15:13:34.675668 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" podStartSLOduration=16.675645087 podStartE2EDuration="16.675645087s" podCreationTimestamp="2026-02-24 15:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:34.669822049 +0000 UTC m=+1476.288880552" watchObservedRunningTime="2026-02-24 15:13:34.675645087 +0000 UTC m=+1476.294703590" Feb 24 15:13:34 crc kubenswrapper[4982]: I0224 15:13:34.717448 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.717419863 podStartE2EDuration="16.717419863s" podCreationTimestamp="2026-02-24 15:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:34.693235925 +0000 UTC m=+1476.312294418" watchObservedRunningTime="2026-02-24 15:13:34.717419863 +0000 UTC m=+1476.336478366" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.329990 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bzq42" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.395001 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-combined-ca-bundle\") pod \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.395197 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-config\") pod \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.395275 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms6r4\" (UniqueName: \"kubernetes.io/projected/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-kube-api-access-ms6r4\") pod \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\" (UID: \"ee4cff74-7cc3-4470-8caf-32c6fbfb437b\") " Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.412879 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-kube-api-access-ms6r4" (OuterVolumeSpecName: "kube-api-access-ms6r4") pod "ee4cff74-7cc3-4470-8caf-32c6fbfb437b" (UID: "ee4cff74-7cc3-4470-8caf-32c6fbfb437b"). InnerVolumeSpecName "kube-api-access-ms6r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.436595 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-config" (OuterVolumeSpecName: "config") pod "ee4cff74-7cc3-4470-8caf-32c6fbfb437b" (UID: "ee4cff74-7cc3-4470-8caf-32c6fbfb437b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.450922 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee4cff74-7cc3-4470-8caf-32c6fbfb437b" (UID: "ee4cff74-7cc3-4470-8caf-32c6fbfb437b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.498122 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.498162 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms6r4\" (UniqueName: \"kubernetes.io/projected/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-kube-api-access-ms6r4\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.498173 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4cff74-7cc3-4470-8caf-32c6fbfb437b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.682984 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bzq42" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.683402 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bzq42" event={"ID":"ee4cff74-7cc3-4470-8caf-32c6fbfb437b","Type":"ContainerDied","Data":"c7c407eb511929d3e5ebc8f528582a3eca8d4e867b32b121a032e1667c19d1af"} Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.683726 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7c407eb511929d3e5ebc8f528582a3eca8d4e867b32b121a032e1667c19d1af" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.688763 4982 generic.go:334] "Generic (PLEG): container finished" podID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerID="3a081aa8aeca37d0dc99790dbbd23227ca09341bd5b13ae8a9fda501c5852f21" exitCode=0 Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.688802 4982 generic.go:334] "Generic (PLEG): container finished" podID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerID="f7ff05a49081d4d89ec4e9b663ad33e76e2b736c1850e05f5036dad5286f6264" exitCode=143 Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.689356 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"10299efa-200c-4ee5-b91f-c0708ef27fab","Type":"ContainerDied","Data":"3a081aa8aeca37d0dc99790dbbd23227ca09341bd5b13ae8a9fda501c5852f21"} Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.689430 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"10299efa-200c-4ee5-b91f-c0708ef27fab","Type":"ContainerDied","Data":"f7ff05a49081d4d89ec4e9b663ad33e76e2b736c1850e05f5036dad5286f6264"} Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.689697 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dab6259a-5558-4bdc-9b99-1c2ba8778593" containerName="glance-log" containerID="cri-o://d9485b817c09e03b97c9b208ac5ca680f9578bd551a39647700aa5e5e4baa3d1" gracePeriod=30 Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.689979 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dab6259a-5558-4bdc-9b99-1c2ba8778593" containerName="glance-httpd" containerID="cri-o://1ebc174c4aa18925ef9c2a537cf33f2a9e4b83a088d8f92794fd4fe7ea09f349" gracePeriod=30 Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.802466 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=17.802444855 podStartE2EDuration="17.802444855s" podCreationTimestamp="2026-02-24 15:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:35.770100116 +0000 UTC m=+1477.389158609" watchObservedRunningTime="2026-02-24 15:13:35.802444855 +0000 UTC m=+1477.421503348" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.927593 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-t9v58"] Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.992974 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6r9d"] Feb 24 15:13:35 crc kubenswrapper[4982]: E0224 15:13:35.994167 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="init" Feb 24 15:13:35 
crc kubenswrapper[4982]: I0224 15:13:35.994194 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="init" Feb 24 15:13:35 crc kubenswrapper[4982]: E0224 15:13:35.994224 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="dnsmasq-dns" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.994234 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="dnsmasq-dns" Feb 24 15:13:35 crc kubenswrapper[4982]: E0224 15:13:35.994283 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4cff74-7cc3-4470-8caf-32c6fbfb437b" containerName="neutron-db-sync" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.994291 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4cff74-7cc3-4470-8caf-32c6fbfb437b" containerName="neutron-db-sync" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.995617 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4cff74-7cc3-4470-8caf-32c6fbfb437b" containerName="neutron-db-sync" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.995661 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fc4eea-f1dd-45dd-952c-dd01bff24a3b" containerName="dnsmasq-dns" Feb 24 15:13:35 crc kubenswrapper[4982]: I0224 15:13:35.997093 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.032921 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6r9d"] Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.074273 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75468f4444-6cfm8"] Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.078661 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.088009 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.088273 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.088480 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c7rfv" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.089195 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.099618 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75468f4444-6cfm8"] Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136572 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-ovndb-tls-certs\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136631 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136672 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zzbp\" (UniqueName: \"kubernetes.io/projected/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-kube-api-access-6zzbp\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136692 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tghm6\" (UniqueName: \"kubernetes.io/projected/1f5da815-46e1-4224-bd34-feb1cdb54446-kube-api-access-tghm6\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136729 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-config\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136754 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-combined-ca-bundle\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136775 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136839 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-svc\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136864 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136939 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-httpd-config\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.136961 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-config\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.239974 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-config\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240045 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-ovndb-tls-certs\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240080 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240114 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zzbp\" (UniqueName: \"kubernetes.io/projected/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-kube-api-access-6zzbp\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240133 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tghm6\" (UniqueName: 
\"kubernetes.io/projected/1f5da815-46e1-4224-bd34-feb1cdb54446-kube-api-access-tghm6\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240166 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-config\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240187 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-combined-ca-bundle\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240214 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240274 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-svc\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240301 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.240379 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-httpd-config\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.241889 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.241991 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-config\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.250035 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: 
\"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.250805 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-svc\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.251316 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.253428 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-httpd-config\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.254856 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-config\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.255633 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-combined-ca-bundle\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.279724 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-ovndb-tls-certs\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.289359 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tghm6\" (UniqueName: \"kubernetes.io/projected/1f5da815-46e1-4224-bd34-feb1cdb54446-kube-api-access-tghm6\") pod \"neutron-75468f4444-6cfm8\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.295425 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zzbp\" (UniqueName: \"kubernetes.io/projected/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-kube-api-access-6zzbp\") pod \"dnsmasq-dns-6b7b667979-d6r9d\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.348065 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.460778 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.706713 4982 generic.go:334] "Generic (PLEG): container finished" podID="dab6259a-5558-4bdc-9b99-1c2ba8778593" containerID="d9485b817c09e03b97c9b208ac5ca680f9578bd551a39647700aa5e5e4baa3d1" exitCode=143 Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.706802 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab6259a-5558-4bdc-9b99-1c2ba8778593","Type":"ContainerDied","Data":"d9485b817c09e03b97c9b208ac5ca680f9578bd551a39647700aa5e5e4baa3d1"} Feb 24 15:13:36 crc kubenswrapper[4982]: I0224 15:13:36.706924 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" podUID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" containerName="dnsmasq-dns" containerID="cri-o://280f2d154ff9dc4c2e21a24df558fcfa39d02ad9fedb5bf9d465f5514bdfc7cf" gracePeriod=10 Feb 24 15:13:37 crc kubenswrapper[4982]: I0224 15:13:37.725473 4982 generic.go:334] "Generic (PLEG): container finished" podID="dab6259a-5558-4bdc-9b99-1c2ba8778593" containerID="1ebc174c4aa18925ef9c2a537cf33f2a9e4b83a088d8f92794fd4fe7ea09f349" exitCode=0 Feb 24 15:13:37 crc kubenswrapper[4982]: I0224 15:13:37.725540 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab6259a-5558-4bdc-9b99-1c2ba8778593","Type":"ContainerDied","Data":"1ebc174c4aa18925ef9c2a537cf33f2a9e4b83a088d8f92794fd4fe7ea09f349"} Feb 24 15:13:37 crc kubenswrapper[4982]: I0224 15:13:37.733489 4982 generic.go:334] "Generic (PLEG): container finished" podID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" containerID="280f2d154ff9dc4c2e21a24df558fcfa39d02ad9fedb5bf9d465f5514bdfc7cf" exitCode=0 Feb 24 15:13:37 crc kubenswrapper[4982]: I0224 15:13:37.733552 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" event={"ID":"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae","Type":"ContainerDied","Data":"280f2d154ff9dc4c2e21a24df558fcfa39d02ad9fedb5bf9d465f5514bdfc7cf"} Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.189601 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-65698f94df-gbcsr"] Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.193641 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.195581 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.195975 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.206322 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-internal-tls-certs\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.206408 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-public-tls-certs\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.206475 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-combined-ca-bundle\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.206515 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-ovndb-tls-certs\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.206594 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qll7t\" (UniqueName: \"kubernetes.io/projected/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-kube-api-access-qll7t\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.206706 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-httpd-config\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.206741 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-config\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.223793 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65698f94df-gbcsr"] Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.309618 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qll7t\" (UniqueName: 
\"kubernetes.io/projected/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-kube-api-access-qll7t\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.309762 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-httpd-config\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.309793 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-config\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.309858 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-internal-tls-certs\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.309905 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-public-tls-certs\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.309948 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-combined-ca-bundle\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.309963 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-ovndb-tls-certs\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.316454 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-public-tls-certs\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.316731 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-config\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.316960 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-internal-tls-certs\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " 
pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.317166 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-httpd-config\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.318372 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-ovndb-tls-certs\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.318673 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-combined-ca-bundle\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.332864 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qll7t\" (UniqueName: \"kubernetes.io/projected/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-kube-api-access-qll7t\") pod \"neutron-65698f94df-gbcsr\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.517806 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.712965 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.717314 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-logs\") pod \"10299efa-200c-4ee5-b91f-c0708ef27fab\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.717433 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-config-data\") pod \"10299efa-200c-4ee5-b91f-c0708ef27fab\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.717481 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-scripts\") pod \"10299efa-200c-4ee5-b91f-c0708ef27fab\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.717569 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-httpd-run\") pod \"10299efa-200c-4ee5-b91f-c0708ef27fab\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.717590 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-combined-ca-bundle\") pod \"10299efa-200c-4ee5-b91f-c0708ef27fab\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.717800 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"10299efa-200c-4ee5-b91f-c0708ef27fab\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.717890 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh6sm\" (UniqueName: \"kubernetes.io/projected/10299efa-200c-4ee5-b91f-c0708ef27fab-kube-api-access-kh6sm\") pod \"10299efa-200c-4ee5-b91f-c0708ef27fab\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.722003 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-logs" (OuterVolumeSpecName: "logs") pod "10299efa-200c-4ee5-b91f-c0708ef27fab" (UID: "10299efa-200c-4ee5-b91f-c0708ef27fab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.722978 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "10299efa-200c-4ee5-b91f-c0708ef27fab" (UID: "10299efa-200c-4ee5-b91f-c0708ef27fab"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.749683 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-scripts" (OuterVolumeSpecName: "scripts") pod "10299efa-200c-4ee5-b91f-c0708ef27fab" (UID: "10299efa-200c-4ee5-b91f-c0708ef27fab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.749787 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.749823 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.750648 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10299efa-200c-4ee5-b91f-c0708ef27fab-kube-api-access-kh6sm" (OuterVolumeSpecName: "kube-api-access-kh6sm") pod "10299efa-200c-4ee5-b91f-c0708ef27fab" (UID: "10299efa-200c-4ee5-b91f-c0708ef27fab"). InnerVolumeSpecName "kube-api-access-kh6sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.839619 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10299efa-200c-4ee5-b91f-c0708ef27fab" (UID: "10299efa-200c-4ee5-b91f-c0708ef27fab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.840473 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.840529 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh6sm\" (UniqueName: \"kubernetes.io/projected/10299efa-200c-4ee5-b91f-c0708ef27fab-kube-api-access-kh6sm\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.840555 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10299efa-200c-4ee5-b91f-c0708ef27fab-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.840566 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:38 crc kubenswrapper[4982]: I0224 15:13:38.945608 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:38.999343 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-config-data" (OuterVolumeSpecName: "config-data") pod "10299efa-200c-4ee5-b91f-c0708ef27fab" (UID: "10299efa-200c-4ee5-b91f-c0708ef27fab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:39 crc kubenswrapper[4982]: E0224 15:13:39.012175 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee podName:10299efa-200c-4ee5-b91f-c0708ef27fab nodeName:}" failed. No retries permitted until 2026-02-24 15:13:39.512147988 +0000 UTC m=+1481.131206481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "glance" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee") pod "10299efa-200c-4ee5-b91f-c0708ef27fab" (UID: "10299efa-200c-4ee5-b91f-c0708ef27fab") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.012542 4982 generic.go:334] "Generic (PLEG): container finished" podID="8d53de54-d42e-4095-95b0-df6db57c2106" containerID="fe188739f98702b7c44f5de1b62011c6c9b35ab29531dbf95df0ac457d45e018" exitCode=0 Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.012615 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wwkwb" event={"ID":"8d53de54-d42e-4095-95b0-df6db57c2106","Type":"ContainerDied","Data":"fe188739f98702b7c44f5de1b62011c6c9b35ab29531dbf95df0ac457d45e018"} Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.016371 4982 generic.go:334] "Generic (PLEG): container finished" podID="04b34e34-d603-4df9-a028-0169bf57fae7" containerID="16931269a19bccd00d82c8a5775b9ad1f391fd9ffd7950cbc51625588817cf81" exitCode=0 Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.016450 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s55kw" event={"ID":"04b34e34-d603-4df9-a028-0169bf57fae7","Type":"ContainerDied","Data":"16931269a19bccd00d82c8a5775b9ad1f391fd9ffd7950cbc51625588817cf81"} Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.022044 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"10299efa-200c-4ee5-b91f-c0708ef27fab","Type":"ContainerDied","Data":"c75810453efa93d13e514a9e67bf610989869da473c977c5b9ac173b35c5e3f6"} Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.022089 4982 scope.go:117] "RemoveContainer" containerID="3a081aa8aeca37d0dc99790dbbd23227ca09341bd5b13ae8a9fda501c5852f21" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.022229 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.049173 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10299efa-200c-4ee5-b91f-c0708ef27fab-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.563229 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"10299efa-200c-4ee5-b91f-c0708ef27fab\" (UID: \"10299efa-200c-4ee5-b91f-c0708ef27fab\") " Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.588421 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee" (OuterVolumeSpecName: "glance") pod "10299efa-200c-4ee5-b91f-c0708ef27fab" (UID: "10299efa-200c-4ee5-b91f-c0708ef27fab"). InnerVolumeSpecName "pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.667079 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") on node \"crc\" " Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.669937 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.689476 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.707523 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:13:39 crc kubenswrapper[4982]: E0224 15:13:39.708099 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerName="glance-httpd" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.708127 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerName="glance-httpd" Feb 24 15:13:39 crc kubenswrapper[4982]: E0224 15:13:39.708155 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerName="glance-log" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.708163 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerName="glance-log" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.708657 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerName="glance-httpd" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.708676 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="10299efa-200c-4ee5-b91f-c0708ef27fab" containerName="glance-log" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.709952 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.712934 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.713060 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.721905 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.731000 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.745810 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee") on node "crc" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.875445 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.875614 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.875665 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.875746 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.875832 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtxrn\" (UniqueName: \"kubernetes.io/projected/ee0ec3ad-df19-4e19-a288-d6ca32779160-kube-api-access-xtxrn\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.875967 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.876031 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.876204 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " 
pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.880008 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.880062 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70a27d34e7ae7884b230e8e34a62af8eccdb60653e302a58734e40794a2eda43/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.932401 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.980557 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.980718 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.982030 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.982133 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.982296 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtxrn\" (UniqueName: \"kubernetes.io/projected/ee0ec3ad-df19-4e19-a288-d6ca32779160-kube-api-access-xtxrn\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.982476 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " 
pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.982597 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.984066 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.984234 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.988251 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.988473 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.988933 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:39 crc kubenswrapper[4982]: I0224 15:13:39.989440 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:40 crc kubenswrapper[4982]: I0224 15:13:40.006106 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtxrn\" (UniqueName: \"kubernetes.io/projected/ee0ec3ad-df19-4e19-a288-d6ca32779160-kube-api-access-xtxrn\") pod \"glance-default-internal-api-0\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:13:40 crc kubenswrapper[4982]: I0224 15:13:40.040373 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:41 crc kubenswrapper[4982]: I0224 15:13:41.057719 4982 generic.go:334] "Generic (PLEG): container finished" podID="2b24785f-aeec-4512-8f01-b0e1fc31a2b4" containerID="a28fa4c1c14ee8ba67d786e0aa16a7fc209bc0822cbefff9c60a9acc2b2ab21d" exitCode=0 Feb 24 15:13:41 crc kubenswrapper[4982]: I0224 15:13:41.057986 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cq29z" event={"ID":"2b24785f-aeec-4512-8f01-b0e1fc31a2b4","Type":"ContainerDied","Data":"a28fa4c1c14ee8ba67d786e0aa16a7fc209bc0822cbefff9c60a9acc2b2ab21d"} Feb 24 15:13:41 crc kubenswrapper[4982]: I0224 15:13:41.165027 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10299efa-200c-4ee5-b91f-c0708ef27fab" path="/var/lib/kubelet/pods/10299efa-200c-4ee5-b91f-c0708ef27fab/volumes" Feb 24 15:13:42 crc kubenswrapper[4982]: I0224 15:13:42.868825 4982 scope.go:117] "RemoveContainer" containerID="f7ff05a49081d4d89ec4e9b663ad33e76e2b736c1850e05f5036dad5286f6264" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.107374 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cq29z" event={"ID":"2b24785f-aeec-4512-8f01-b0e1fc31a2b4","Type":"ContainerDied","Data":"4fc0255cffd230ceae5df71aeabf91191e5aa67f24ed95c8ac523edba297630e"} Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.107780 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fc0255cffd230ceae5df71aeabf91191e5aa67f24ed95c8ac523edba297630e" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.114985 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" event={"ID":"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae","Type":"ContainerDied","Data":"55e3efde61002e7aebe858d5aec3beadbb55b769e508764ff355b8f8591b3a30"} Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.115023 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55e3efde61002e7aebe858d5aec3beadbb55b769e508764ff355b8f8591b3a30" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.121854 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab6259a-5558-4bdc-9b99-1c2ba8778593","Type":"ContainerDied","Data":"1e7beeb03cb560b74330a03b70b1b6afadd0ae1149305261dd09d912ea7dba15"} Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.121893 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7beeb03cb560b74330a03b70b1b6afadd0ae1149305261dd09d912ea7dba15" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.123823 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wwkwb" event={"ID":"8d53de54-d42e-4095-95b0-df6db57c2106","Type":"ContainerDied","Data":"640ec5f9ca5f170c0caac3caee9ea8bf972fcc074f0faa7632c30a0bcf69925f"} Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.123853 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="640ec5f9ca5f170c0caac3caee9ea8bf972fcc074f0faa7632c30a0bcf69925f" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.125881 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s55kw" event={"ID":"04b34e34-d603-4df9-a028-0169bf57fae7","Type":"ContainerDied","Data":"cf0bbb7b33e8b30b5f94781f577eb1208bd03f6ea2236fc69fc95429ed43843a"} Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.125911 4982 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf0bbb7b33e8b30b5f94781f577eb1208bd03f6ea2236fc69fc95429ed43843a" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.128262 4982 generic.go:334] "Generic (PLEG): container finished" podID="2295dbe7-ace5-48d8-8952-fd4c3b5ddf99" containerID="a49cceabac4005ea9fcba0f3ac98dee484269c7940d08c44e9fbc6c1e2495985" exitCode=0 Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.128296 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9zp9n" event={"ID":"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99","Type":"ContainerDied","Data":"a49cceabac4005ea9fcba0f3ac98dee484269c7940d08c44e9fbc6c1e2495985"} Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.187396 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.208048 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-combined-ca-bundle\") pod \"04b34e34-d603-4df9-a028-0169bf57fae7\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.208352 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-fernet-keys\") pod \"04b34e34-d603-4df9-a028-0169bf57fae7\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.208476 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-config-data\") pod \"04b34e34-d603-4df9-a028-0169bf57fae7\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.208536 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-credential-keys\") pod \"04b34e34-d603-4df9-a028-0169bf57fae7\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.208565 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p926\" (UniqueName: \"kubernetes.io/projected/04b34e34-d603-4df9-a028-0169bf57fae7-kube-api-access-8p926\") pod \"04b34e34-d603-4df9-a028-0169bf57fae7\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.208595 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-scripts\") pod \"04b34e34-d603-4df9-a028-0169bf57fae7\" (UID: \"04b34e34-d603-4df9-a028-0169bf57fae7\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.234192 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.234620 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.254693 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "04b34e34-d603-4df9-a028-0169bf57fae7" (UID: "04b34e34-d603-4df9-a028-0169bf57fae7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.254713 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "04b34e34-d603-4df9-a028-0169bf57fae7" (UID: "04b34e34-d603-4df9-a028-0169bf57fae7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.254864 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b34e34-d603-4df9-a028-0169bf57fae7-kube-api-access-8p926" (OuterVolumeSpecName: "kube-api-access-8p926") pod "04b34e34-d603-4df9-a028-0169bf57fae7" (UID: "04b34e34-d603-4df9-a028-0169bf57fae7"). InnerVolumeSpecName "kube-api-access-8p926". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.254916 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-scripts" (OuterVolumeSpecName: "scripts") pod "04b34e34-d603-4df9-a028-0169bf57fae7" (UID: "04b34e34-d603-4df9-a028-0169bf57fae7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.291077 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04b34e34-d603-4df9-a028-0169bf57fae7" (UID: "04b34e34-d603-4df9-a028-0169bf57fae7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.312613 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl9gh\" (UniqueName: \"kubernetes.io/projected/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-kube-api-access-bl9gh\") pod \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.313704 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-scripts\") pod \"dab6259a-5558-4bdc-9b99-1c2ba8778593\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.313895 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-httpd-run\") pod \"dab6259a-5558-4bdc-9b99-1c2ba8778593\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.313967 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-logs\") pod \"dab6259a-5558-4bdc-9b99-1c2ba8778593\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.314056 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-sb\") pod \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.314130 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-config-data\") pod \"dab6259a-5558-4bdc-9b99-1c2ba8778593\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.314243 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-svc\") pod \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.314334 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-swift-storage-0\") pod \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.314415 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv599\" (UniqueName: \"kubernetes.io/projected/dab6259a-5558-4bdc-9b99-1c2ba8778593-kube-api-access-cv599\") pod \"dab6259a-5558-4bdc-9b99-1c2ba8778593\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.314624 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"dab6259a-5558-4bdc-9b99-1c2ba8778593\" (UID: 
\"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.314720 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-combined-ca-bundle\") pod \"dab6259a-5558-4bdc-9b99-1c2ba8778593\" (UID: \"dab6259a-5558-4bdc-9b99-1c2ba8778593\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.314853 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-config\") pod \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.314927 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-nb\") pod \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\" (UID: \"61f91d78-8f5e-46ec-9dc6-df98ae71f8ae\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.315647 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.315731 4982 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.315792 4982 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.315850 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p926\" (UniqueName: \"kubernetes.io/projected/04b34e34-d603-4df9-a028-0169bf57fae7-kube-api-access-8p926\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.315920 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.324665 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dab6259a-5558-4bdc-9b99-1c2ba8778593" (UID: "dab6259a-5558-4bdc-9b99-1c2ba8778593"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.330992 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-logs" (OuterVolumeSpecName: "logs") pod "dab6259a-5558-4bdc-9b99-1c2ba8778593" (UID: "dab6259a-5558-4bdc-9b99-1c2ba8778593"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.340900 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-config-data" (OuterVolumeSpecName: "config-data") pod "04b34e34-d603-4df9-a028-0169bf57fae7" (UID: "04b34e34-d603-4df9-a028-0169bf57fae7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.342694 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-scripts" (OuterVolumeSpecName: "scripts") pod "dab6259a-5558-4bdc-9b99-1c2ba8778593" (UID: "dab6259a-5558-4bdc-9b99-1c2ba8778593"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.350809 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-kube-api-access-bl9gh" (OuterVolumeSpecName: "kube-api-access-bl9gh") pod "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" (UID: "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae"). InnerVolumeSpecName "kube-api-access-bl9gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.354846 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab6259a-5558-4bdc-9b99-1c2ba8778593-kube-api-access-cv599" (OuterVolumeSpecName: "kube-api-access-cv599") pod "dab6259a-5558-4bdc-9b99-1c2ba8778593" (UID: "dab6259a-5558-4bdc-9b99-1c2ba8778593"). InnerVolumeSpecName "kube-api-access-cv599". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.356206 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b" (OuterVolumeSpecName: "glance") pod "dab6259a-5558-4bdc-9b99-1c2ba8778593" (UID: "dab6259a-5558-4bdc-9b99-1c2ba8778593"). InnerVolumeSpecName "pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.374609 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wwkwb" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.381714 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cq29z" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.415890 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" (UID: "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.417868 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj2hj\" (UniqueName: \"kubernetes.io/projected/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-kube-api-access-vj2hj\") pod \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.417972 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-combined-ca-bundle\") pod \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.420541 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-db-sync-config-data\") pod \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\" (UID: \"2b24785f-aeec-4512-8f01-b0e1fc31a2b4\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.420607 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-combined-ca-bundle\") pod \"8d53de54-d42e-4095-95b0-df6db57c2106\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.420631 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-config-data\") pod \"8d53de54-d42e-4095-95b0-df6db57c2106\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.420762 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53de54-d42e-4095-95b0-df6db57c2106-logs\") pod \"8d53de54-d42e-4095-95b0-df6db57c2106\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.420808 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2l9r\" (UniqueName: \"kubernetes.io/projected/8d53de54-d42e-4095-95b0-df6db57c2106-kube-api-access-c2l9r\") pod \"8d53de54-d42e-4095-95b0-df6db57c2106\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.420879 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-scripts\") pod \"8d53de54-d42e-4095-95b0-df6db57c2106\" (UID: \"8d53de54-d42e-4095-95b0-df6db57c2106\") " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.421544 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.421557 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab6259a-5558-4bdc-9b99-1c2ba8778593-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.421566 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.421575 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv599\" (UniqueName: \"kubernetes.io/projected/dab6259a-5558-4bdc-9b99-1c2ba8778593-kube-api-access-cv599\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.421599 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") on node \"crc\" " Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.421610 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b34e34-d603-4df9-a028-0169bf57fae7-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.421621 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl9gh\" (UniqueName: \"kubernetes.io/projected/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-kube-api-access-bl9gh\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.421630 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.424332 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d53de54-d42e-4095-95b0-df6db57c2106-logs" (OuterVolumeSpecName: "logs") pod "8d53de54-d42e-4095-95b0-df6db57c2106" (UID: "8d53de54-d42e-4095-95b0-df6db57c2106"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.429227 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-kube-api-access-vj2hj" (OuterVolumeSpecName: "kube-api-access-vj2hj") pod "2b24785f-aeec-4512-8f01-b0e1fc31a2b4" (UID: "2b24785f-aeec-4512-8f01-b0e1fc31a2b4"). InnerVolumeSpecName "kube-api-access-vj2hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.432784 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-scripts" (OuterVolumeSpecName: "scripts") pod "8d53de54-d42e-4095-95b0-df6db57c2106" (UID: "8d53de54-d42e-4095-95b0-df6db57c2106"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.433573 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d53de54-d42e-4095-95b0-df6db57c2106-kube-api-access-c2l9r" (OuterVolumeSpecName: "kube-api-access-c2l9r") pod "8d53de54-d42e-4095-95b0-df6db57c2106" (UID: "8d53de54-d42e-4095-95b0-df6db57c2106"). InnerVolumeSpecName "kube-api-access-c2l9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.434553 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dab6259a-5558-4bdc-9b99-1c2ba8778593" (UID: "dab6259a-5558-4bdc-9b99-1c2ba8778593"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.434594 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" (UID: "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.436610 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2b24785f-aeec-4512-8f01-b0e1fc31a2b4" (UID: "2b24785f-aeec-4512-8f01-b0e1fc31a2b4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.447138 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" (UID: "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.452834 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" (UID: "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.467606 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-config-data" (OuterVolumeSpecName: "config-data") pod "8d53de54-d42e-4095-95b0-df6db57c2106" (UID: "8d53de54-d42e-4095-95b0-df6db57c2106"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.469183 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-config" (OuterVolumeSpecName: "config") pod "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" (UID: "61f91d78-8f5e-46ec-9dc6-df98ae71f8ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.470366 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b24785f-aeec-4512-8f01-b0e1fc31a2b4" (UID: "2b24785f-aeec-4512-8f01-b0e1fc31a2b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.472287 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-config-data" (OuterVolumeSpecName: "config-data") pod "dab6259a-5558-4bdc-9b99-1c2ba8778593" (UID: "dab6259a-5558-4bdc-9b99-1c2ba8778593"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.474593 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d53de54-d42e-4095-95b0-df6db57c2106" (UID: "8d53de54-d42e-4095-95b0-df6db57c2106"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.486917 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.487115 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b") on node "crc" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524169 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524207 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524217 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524226 4982 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524235 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524243 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524253 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524264 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524276 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab6259a-5558-4bdc-9b99-1c2ba8778593-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524294 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53de54-d42e-4095-95b0-df6db57c2106-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524302 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2l9r\" (UniqueName: \"kubernetes.io/projected/8d53de54-d42e-4095-95b0-df6db57c2106-kube-api-access-c2l9r\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524312 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524319 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524328 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53de54-d42e-4095-95b0-df6db57c2106-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.524336 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj2hj\" (UniqueName: \"kubernetes.io/projected/2b24785f-aeec-4512-8f01-b0e1fc31a2b4-kube-api-access-vj2hj\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:43 crc kubenswrapper[4982]: W0224 15:13:43.682819 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f5da815_46e1_4224_bd34_feb1cdb54446.slice/crio-aa374e8489b93bcfe8474d40d27946b1c830ba8140485f2d9fbf40dfdfb1ca22 WatchSource:0}: Error finding container aa374e8489b93bcfe8474d40d27946b1c830ba8140485f2d9fbf40dfdfb1ca22: Status 404 returned error can't find the container with id aa374e8489b93bcfe8474d40d27946b1c830ba8140485f2d9fbf40dfdfb1ca22 Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.684362 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75468f4444-6cfm8"] Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.802616 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6r9d"] Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.880192 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65698f94df-gbcsr"] Feb 24 15:13:43 crc kubenswrapper[4982]: I0224 15:13:43.974029 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:13:43 crc kubenswrapper[4982]: W0224 15:13:43.983687 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac WatchSource:0}: Error finding container 
a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac: Status 404 returned error can't find the container with id a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.052124 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" podUID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.191:5353: i/o timeout" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.163169 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee0ec3ad-df19-4e19-a288-d6ca32779160","Type":"ContainerStarted","Data":"a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac"} Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.181104 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" event={"ID":"e83a7473-0ae0-4c93-b9cf-91600f3f0a66","Type":"ContainerStarted","Data":"9d01787f045a95c6ed5c2a1dc708e7d843d960a05d3983e4fa82d3587c9c6e60"} Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.188575 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75468f4444-6cfm8" event={"ID":"1f5da815-46e1-4224-bd34-feb1cdb54446","Type":"ContainerStarted","Data":"774b93cbf56825fca1510cd4e5cc4eb4c289eb871fa8d6a73eaaed170b3f6bc4"} Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.188608 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75468f4444-6cfm8" event={"ID":"1f5da815-46e1-4224-bd34-feb1cdb54446","Type":"ContainerStarted","Data":"aa374e8489b93bcfe8474d40d27946b1c830ba8140485f2d9fbf40dfdfb1ca22"} Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.194153 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65698f94df-gbcsr" event={"ID":"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef","Type":"ContainerStarted","Data":"7db4ffa7bf8ab0607ccef8bd268a29baa64c1f31635af64d4b837541714d7c4c"} Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.197622 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cq29z" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.197896 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s55kw" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.198041 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1","Type":"ContainerStarted","Data":"6ea7ac2c9eaee9dbb02d9413c13603b37f716bd79cadd224b8a2999f2f737e5a"} Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.198569 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-t9v58" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.198649 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.200677 4982 util.go:48] "No ready sandbox for pod can be found. 
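The readiness probe failure above ("dial tcp 10.217.0.191:5353: i/o timeout" against the dnsmasq-dns container) is a plain tcpSocket probe: the kubelet attempts a TCP connection to the pod IP and port and records a failure if the dial does not complete in time. A standalone sketch of the same check (the 1s timeout is an assumption; the address is from the log line):

// tcp_probe.go: a tcpSocket-style readiness check. Success iff a TCP
// connection opens before the timeout, matching the kubelet's probeResult.
package main

import (
	"fmt"
	"net"
	"time"
)

func tcpProbe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // kubelet would record probeResult="failure" with this error
	}
	return conn.Close()
}

func main() {
	if err := tcpProbe("10.217.0.191:5353", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	} else {
		fmt.Println("Probe succeeded")
	}
}

Here the failure is expected: the dnsmasq-dns-56df8fb6b7-t9v58 pod's containers were already gone, so nothing was listening on 5353.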
Need to start a new one" pod="openstack/placement-db-sync-wwkwb" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.385349 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-66dcfd46c4-xzlpx"] Feb 24 15:13:44 crc kubenswrapper[4982]: E0224 15:13:44.386523 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b24785f-aeec-4512-8f01-b0e1fc31a2b4" containerName="barbican-db-sync" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.386549 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b24785f-aeec-4512-8f01-b0e1fc31a2b4" containerName="barbican-db-sync" Feb 24 15:13:44 crc kubenswrapper[4982]: E0224 15:13:44.407297 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" containerName="init" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.407342 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" containerName="init" Feb 24 15:13:44 crc kubenswrapper[4982]: E0224 15:13:44.407370 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" containerName="dnsmasq-dns" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.407379 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" containerName="dnsmasq-dns" Feb 24 15:13:44 crc kubenswrapper[4982]: E0224 15:13:44.407404 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab6259a-5558-4bdc-9b99-1c2ba8778593" containerName="glance-httpd" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.407411 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab6259a-5558-4bdc-9b99-1c2ba8778593" containerName="glance-httpd" Feb 24 15:13:44 crc kubenswrapper[4982]: E0224 15:13:44.407433 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b34e34-d603-4df9-a028-0169bf57fae7" containerName="keystone-bootstrap" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.407440 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b34e34-d603-4df9-a028-0169bf57fae7" containerName="keystone-bootstrap" Feb 24 15:13:44 crc kubenswrapper[4982]: E0224 15:13:44.407472 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab6259a-5558-4bdc-9b99-1c2ba8778593" containerName="glance-log" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.407482 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab6259a-5558-4bdc-9b99-1c2ba8778593" containerName="glance-log" Feb 24 15:13:44 crc kubenswrapper[4982]: E0224 15:13:44.407524 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d53de54-d42e-4095-95b0-df6db57c2106" containerName="placement-db-sync" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.407533 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d53de54-d42e-4095-95b0-df6db57c2106" containerName="placement-db-sync" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.408118 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b24785f-aeec-4512-8f01-b0e1fc31a2b4" containerName="barbican-db-sync" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.408139 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab6259a-5558-4bdc-9b99-1c2ba8778593" containerName="glance-log" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.408149 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab6259a-5558-4bdc-9b99-1c2ba8778593" 
containerName="glance-httpd" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.408159 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d53de54-d42e-4095-95b0-df6db57c2106" containerName="placement-db-sync" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.408170 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" containerName="dnsmasq-dns" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.408190 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b34e34-d603-4df9-a028-0169bf57fae7" containerName="keystone-bootstrap" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.413745 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.420779 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.421884 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.427492 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8hsbr" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.427678 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.427774 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.429015 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.445761 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-internal-tls-certs\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.445806 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-fernet-keys\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.445834 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-config-data\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.446066 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-credential-keys\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.446172 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-scripts\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.446354 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzr69\" (UniqueName: \"kubernetes.io/projected/69f6a530-6cab-4af8-a122-2648213a4c8b-kube-api-access-vzr69\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.446440 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-public-tls-certs\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.446487 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-combined-ca-bundle\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.470635 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66dcfd46c4-xzlpx"] Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.530792 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-t9v58"] Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.548778 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-public-tls-certs\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.548847 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-combined-ca-bundle\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.548991 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-internal-tls-certs\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.549013 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-fernet-keys\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.549045 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-config-data\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.553228 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-internal-tls-certs\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.554128 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-credential-keys\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.558942 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-scripts\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.559185 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzr69\" (UniqueName: \"kubernetes.io/projected/69f6a530-6cab-4af8-a122-2648213a4c8b-kube-api-access-vzr69\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.560466 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-fernet-keys\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.564005 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-combined-ca-bundle\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.568791 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-credential-keys\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.568913 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-public-tls-certs\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.569530 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-config-data\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " 
pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.582187 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69f6a530-6cab-4af8-a122-2648213a4c8b-scripts\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.586101 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzr69\" (UniqueName: \"kubernetes.io/projected/69f6a530-6cab-4af8-a122-2648213a4c8b-kube-api-access-vzr69\") pod \"keystone-66dcfd46c4-xzlpx\" (UID: \"69f6a530-6cab-4af8-a122-2648213a4c8b\") " pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.607950 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-t9v58"] Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.657010 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.700556 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.801471 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.803384 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.806302 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.821404 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.837744 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.905598 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.945274 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-77797c9b78-rvhwv"] Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.950548 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.957228 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.957637 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.957824 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-n94lc" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.971181 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77797c9b78-rvhwv"] Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.980747 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.981130 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.981224 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.981303 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.981428 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.981558 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.981665 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-logs\") pod 
\"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:44 crc kubenswrapper[4982]: I0224 15:13:44.981813 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddfm\" (UniqueName: \"kubernetes.io/projected/ed67ff00-778a-4253-8f39-52d8ecbcc41b-kube-api-access-pddfm\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.000819 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-77f5949dd7-rps9v"] Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.010611 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.019440 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77f5949dd7-rps9v"] Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.030767 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6r9d"] Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.036467 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.067568 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6557c4cbb4-n6w8z"] Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.069375 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.086193 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7334fe93-50f7-486c-a11a-1cac15b026da-config-data\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.086281 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7334fe93-50f7-486c-a11a-1cac15b026da-logs\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.086309 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7334fe93-50f7-486c-a11a-1cac15b026da-combined-ca-bundle\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.086376 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.086446 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28xjk\" 
(UniqueName: \"kubernetes.io/projected/7334fe93-50f7-486c-a11a-1cac15b026da-kube-api-access-28xjk\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.086481 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fd2f10-64fe-4299-9f45-b81e02687f53-combined-ca-bundle\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.086532 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.087630 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-logs\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.087747 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17fd2f10-64fe-4299-9f45-b81e02687f53-config-data-custom\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.087777 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fd2f10-64fe-4299-9f45-b81e02687f53-config-data\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.087835 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7334fe93-50f7-486c-a11a-1cac15b026da-config-data-custom\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.087866 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pddfm\" (UniqueName: \"kubernetes.io/projected/ed67ff00-778a-4253-8f39-52d8ecbcc41b-kube-api-access-pddfm\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.087938 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc 
kubenswrapper[4982]: I0224 15:13:45.087968 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnx98\" (UniqueName: \"kubernetes.io/projected/17fd2f10-64fe-4299-9f45-b81e02687f53-kube-api-access-qnx98\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.087988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17fd2f10-64fe-4299-9f45-b81e02687f53-logs\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.088004 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.088038 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.088060 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.090050 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.093595 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-logs\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.101440 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.103016 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.106663 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 24 15:13:45 crc kubenswrapper[4982]: 
I0224 15:13:45.107017 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.107173 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.107322 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vmm4w" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.114884 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.120523 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.123632 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.123674 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e56090e32bcbf36b2e5fe0665116dc217294b29490f5f5ced3111bea03953390/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.133227 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.166428 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pddfm\" (UniqueName: \"kubernetes.io/projected/ed67ff00-778a-4253-8f39-52d8ecbcc41b-kube-api-access-pddfm\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197235 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-public-tls-certs\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197282 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-internal-tls-certs\") pod \"placement-6557c4cbb4-n6w8z\" (UID: 
\"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197318 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17fd2f10-64fe-4299-9f45-b81e02687f53-config-data-custom\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197341 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17fd2f10-64fe-4299-9f45-b81e02687f53-config-data\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197369 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-config-data\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197390 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7334fe93-50f7-486c-a11a-1cac15b026da-config-data-custom\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197640 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnx98\" (UniqueName: \"kubernetes.io/projected/17fd2f10-64fe-4299-9f45-b81e02687f53-kube-api-access-qnx98\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197659 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17fd2f10-64fe-4299-9f45-b81e02687f53-logs\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197753 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-combined-ca-bundle\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197844 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7334fe93-50f7-486c-a11a-1cac15b026da-config-data\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197892 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6846\" (UniqueName: 
\"kubernetes.io/projected/26e32ece-d925-472d-91cd-db19b7bbb9ed-kube-api-access-f6846\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197927 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7334fe93-50f7-486c-a11a-1cac15b026da-logs\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.197942 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7334fe93-50f7-486c-a11a-1cac15b026da-combined-ca-bundle\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.198003 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-scripts\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.198076 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e32ece-d925-472d-91cd-db19b7bbb9ed-logs\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.198143 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28xjk\" (UniqueName: \"kubernetes.io/projected/7334fe93-50f7-486c-a11a-1cac15b026da-kube-api-access-28xjk\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.198178 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fd2f10-64fe-4299-9f45-b81e02687f53-combined-ca-bundle\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.198290 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17fd2f10-64fe-4299-9f45-b81e02687f53-logs\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.200131 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7334fe93-50f7-486c-a11a-1cac15b026da-logs\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.233776 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/17fd2f10-64fe-4299-9f45-b81e02687f53-config-data\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.238973 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7334fe93-50f7-486c-a11a-1cac15b026da-config-data\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.239162 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17fd2f10-64fe-4299-9f45-b81e02687f53-combined-ca-bundle\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.239565 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7334fe93-50f7-486c-a11a-1cac15b026da-combined-ca-bundle\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.239742 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17fd2f10-64fe-4299-9f45-b81e02687f53-config-data-custom\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.249537 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7334fe93-50f7-486c-a11a-1cac15b026da-config-data-custom\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.295461 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28xjk\" (UniqueName: \"kubernetes.io/projected/7334fe93-50f7-486c-a11a-1cac15b026da-kube-api-access-28xjk\") pod \"barbican-worker-77f5949dd7-rps9v\" (UID: \"7334fe93-50f7-486c-a11a-1cac15b026da\") " pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.304546 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-config-data\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.305116 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-combined-ca-bundle\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.305360 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f6846\" (UniqueName: \"kubernetes.io/projected/26e32ece-d925-472d-91cd-db19b7bbb9ed-kube-api-access-f6846\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.305523 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-scripts\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.305634 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e32ece-d925-472d-91cd-db19b7bbb9ed-logs\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.305808 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-public-tls-certs\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.305898 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-internal-tls-certs\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.320327 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e32ece-d925-472d-91cd-db19b7bbb9ed-logs\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.321406 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f91d78-8f5e-46ec-9dc6-df98ae71f8ae" path="/var/lib/kubelet/pods/61f91d78-8f5e-46ec-9dc6-df98ae71f8ae/volumes" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.336010 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-combined-ca-bundle\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.379612 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab6259a-5558-4bdc-9b99-1c2ba8778593" path="/var/lib/kubelet/pods/dab6259a-5558-4bdc-9b99-1c2ba8778593/volumes" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.381220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9zp9n" event={"ID":"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99","Type":"ContainerDied","Data":"814b1d17da64301b2dde6db43399f8d2ac9a66c9dcfd37022f3a6043137e6a6b"} Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.381269 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="814b1d17da64301b2dde6db43399f8d2ac9a66c9dcfd37022f3a6043137e6a6b" Feb 24 15:13:45 crc 
kubenswrapper[4982]: I0224 15:13:45.381285 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-w4r8q"] Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.382172 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnx98\" (UniqueName: \"kubernetes.io/projected/17fd2f10-64fe-4299-9f45-b81e02687f53-kube-api-access-qnx98\") pod \"barbican-keystone-listener-77797c9b78-rvhwv\" (UID: \"17fd2f10-64fe-4299-9f45-b81e02687f53\") " pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.459327 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-public-tls-certs\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.460041 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-internal-tls-certs\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.460491 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6846\" (UniqueName: \"kubernetes.io/projected/26e32ece-d925-472d-91cd-db19b7bbb9ed-kube-api-access-f6846\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.461049 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-scripts\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.461148 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e32ece-d925-472d-91cd-db19b7bbb9ed-config-data\") pod \"placement-6557c4cbb4-n6w8z\" (UID: \"26e32ece-d925-472d-91cd-db19b7bbb9ed\") " pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.481710 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6557c4cbb4-n6w8z"] Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.481780 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-w4r8q"] Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.481887 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.485046 4982 generic.go:334] "Generic (PLEG): container finished" podID="e83a7473-0ae0-4c93-b9cf-91600f3f0a66" containerID="162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158" exitCode=0 Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.485146 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" event={"ID":"e83a7473-0ae0-4c93-b9cf-91600f3f0a66","Type":"ContainerDied","Data":"162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158"} Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.489450 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.530632 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75468f4444-6cfm8" event={"ID":"1f5da815-46e1-4224-bd34-feb1cdb54446","Type":"ContainerStarted","Data":"d2c03da9315f1b9421e864c1c0d4dad38478cc071f11beb58b7d41896ee75bdf"} Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.530683 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77f5949dd7-rps9v" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.540806 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.532491 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9zp9n" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.574065 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-combined-ca-bundle\") pod \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.581433 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.582459 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.582632 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.582670 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-config\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" 
(UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.582701 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.584229 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnn5b\" (UniqueName: \"kubernetes.io/projected/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-kube-api-access-gnn5b\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.596792 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78bc8447b6-wqsr4"] Feb 24 15:13:45 crc kubenswrapper[4982]: E0224 15:13:45.618425 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2295dbe7-ace5-48d8-8952-fd4c3b5ddf99" containerName="heat-db-sync" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.618458 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2295dbe7-ace5-48d8-8952-fd4c3b5ddf99" containerName="heat-db-sync" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.620210 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2295dbe7-ace5-48d8-8952-fd4c3b5ddf99" containerName="heat-db-sync" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.622227 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.631583 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65698f94df-gbcsr" event={"ID":"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef","Type":"ContainerStarted","Data":"b9ffcb63eb862dc15ad4ec83fbdbd8580f2dcffded18d222cce0c73c8e5907cc"} Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.632120 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65698f94df-gbcsr" event={"ID":"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef","Type":"ContainerStarted","Data":"70082dd99b1e75ae290840b2beecb2c0623428c56eb38e096e8ef81392a0a651"} Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.652876 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.671409 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2295dbe7-ace5-48d8-8952-fd4c3b5ddf99" (UID: "2295dbe7-ace5-48d8-8952-fd4c3b5ddf99"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.722787 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrzs\" (UniqueName: \"kubernetes.io/projected/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-kube-api-access-9nrzs\") pod \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.722880 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-config-data\") pod \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\" (UID: \"2295dbe7-ace5-48d8-8952-fd4c3b5ddf99\") " Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.723453 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnn5b\" (UniqueName: \"kubernetes.io/projected/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-kube-api-access-gnn5b\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.723595 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.723646 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.723693 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.723715 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-config\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.723737 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.723807 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.724973 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.735022 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.736035 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.736597 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.737094 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-config\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.757017 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78bc8447b6-wqsr4"] Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.759024 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-kube-api-access-9nrzs" (OuterVolumeSpecName: "kube-api-access-9nrzs") pod "2295dbe7-ace5-48d8-8952-fd4c3b5ddf99" (UID: "2295dbe7-ace5-48d8-8952-fd4c3b5ddf99"). InnerVolumeSpecName "kube-api-access-9nrzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.776972 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.791118 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " pod="openstack/glance-default-external-api-0" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.796405 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnn5b\" (UniqueName: \"kubernetes.io/projected/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-kube-api-access-gnn5b\") pod \"dnsmasq-dns-848cf88cfc-w4r8q\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.827678 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data-custom\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.828681 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.828755 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-combined-ca-bundle\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.828965 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08ab5470-a7ff-4276-86f0-0fb046f32b03-logs\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.829026 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gljh6\" (UniqueName: \"kubernetes.io/projected/08ab5470-a7ff-4276-86f0-0fb046f32b03-kube-api-access-gljh6\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.829304 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrzs\" (UniqueName: \"kubernetes.io/projected/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-kube-api-access-9nrzs\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.922205 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75468f4444-6cfm8" podStartSLOduration=9.922183844 podStartE2EDuration="9.922183844s" podCreationTimestamp="2026-02-24 15:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:45.681292234 +0000 UTC m=+1487.300350737" watchObservedRunningTime="2026-02-24 15:13:45.922183844 +0000 UTC m=+1487.541242337" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.931954 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.932036 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-combined-ca-bundle\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.932107 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08ab5470-a7ff-4276-86f0-0fb046f32b03-logs\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.932159 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gljh6\" (UniqueName: \"kubernetes.io/projected/08ab5470-a7ff-4276-86f0-0fb046f32b03-kube-api-access-gljh6\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.932294 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data-custom\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.936083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08ab5470-a7ff-4276-86f0-0fb046f32b03-logs\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.951127 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data-custom\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.973699 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:45 crc kubenswrapper[4982]: I0224 15:13:45.995844 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-65698f94df-gbcsr" podStartSLOduration=7.995824696 podStartE2EDuration="7.995824696s" 
podCreationTimestamp="2026-02-24 15:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:45.776100782 +0000 UTC m=+1487.395159285" watchObservedRunningTime="2026-02-24 15:13:45.995824696 +0000 UTC m=+1487.614883189" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.001783 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-combined-ca-bundle\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.010224 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gljh6\" (UniqueName: \"kubernetes.io/projected/08ab5470-a7ff-4276-86f0-0fb046f32b03-kube-api-access-gljh6\") pod \"barbican-api-78bc8447b6-wqsr4\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.023629 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66dcfd46c4-xzlpx"] Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.079163 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.091434 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.111355 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.503625 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-config-data" (OuterVolumeSpecName: "config-data") pod "2295dbe7-ace5-48d8-8952-fd4c3b5ddf99" (UID: "2295dbe7-ace5-48d8-8952-fd4c3b5ddf99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.556635 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77f5949dd7-rps9v"] Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.557973 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.570586 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77797c9b78-rvhwv"] Feb 24 15:13:46 crc kubenswrapper[4982]: W0224 15:13:46.585084 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17fd2f10_64fe_4299_9f45_b81e02687f53.slice/crio-630055407ca1a3d528465f9c45b3830c2217340b365463ce4db60bcd31ac2dbb WatchSource:0}: Error finding container 630055407ca1a3d528465f9c45b3830c2217340b365463ce4db60bcd31ac2dbb: Status 404 returned error can't find the container with id 630055407ca1a3d528465f9c45b3830c2217340b365463ce4db60bcd31ac2dbb Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.693452 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" event={"ID":"17fd2f10-64fe-4299-9f45-b81e02687f53","Type":"ContainerStarted","Data":"630055407ca1a3d528465f9c45b3830c2217340b365463ce4db60bcd31ac2dbb"} Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.697241 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee0ec3ad-df19-4e19-a288-d6ca32779160","Type":"ContainerStarted","Data":"06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d"} Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.722034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77f5949dd7-rps9v" event={"ID":"7334fe93-50f7-486c-a11a-1cac15b026da","Type":"ContainerStarted","Data":"415dca0bf892c498a5c1f5b69fdd523839c6c578e6e1fede6ef0bd7166e8d90a"} Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.723728 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-9zp9n" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.727812 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66dcfd46c4-xzlpx" event={"ID":"69f6a530-6cab-4af8-a122-2648213a4c8b","Type":"ContainerStarted","Data":"ba686f218d73197eefea383d0bfe46abe83b70ddb027552fa036ee809d57a334"} Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.728224 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:13:46 crc kubenswrapper[4982]: I0224 15:13:46.756324 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6557c4cbb4-n6w8z"] Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.134711 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.233380 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78bc8447b6-wqsr4"] Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.233421 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-w4r8q"] Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.751529 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" event={"ID":"e83a7473-0ae0-4c93-b9cf-91600f3f0a66","Type":"ContainerStarted","Data":"e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7"} Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.751943 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" podUID="e83a7473-0ae0-4c93-b9cf-91600f3f0a66" containerName="dnsmasq-dns" containerID="cri-o://e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7" gracePeriod=10 Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.752021 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.776600 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6557c4cbb4-n6w8z" event={"ID":"26e32ece-d925-472d-91cd-db19b7bbb9ed","Type":"ContainerStarted","Data":"755e95281987c4935e7caf4c2e7d0272ff0029218109615203d3d01e1c11c216"} Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.786735 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66dcfd46c4-xzlpx" event={"ID":"69f6a530-6cab-4af8-a122-2648213a4c8b","Type":"ContainerStarted","Data":"aa4ec9e79fb3f1667d578a99667c30e724e644eef52fe8e02aeddf718f1de28b"} Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.787284 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.794751 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" podStartSLOduration=12.794727939 podStartE2EDuration="12.794727939s" podCreationTimestamp="2026-02-24 15:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:47.771283632 +0000 UTC m=+1489.390342135" watchObservedRunningTime="2026-02-24 15:13:47.794727939 +0000 UTC m=+1489.413786432" Feb 24 15:13:47 crc kubenswrapper[4982]: I0224 15:13:47.819578 4982 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/keystone-66dcfd46c4-xzlpx" podStartSLOduration=3.819547573 podStartE2EDuration="3.819547573s" podCreationTimestamp="2026-02-24 15:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:47.801941505 +0000 UTC m=+1489.420999998" watchObservedRunningTime="2026-02-24 15:13:47.819547573 +0000 UTC m=+1489.438606066" Feb 24 15:13:47 crc kubenswrapper[4982]: W0224 15:13:47.894478 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08ab5470_a7ff_4276_86f0_0fb046f32b03.slice/crio-f6e792a36ffff0a4c915288c79cc26b0b62b76f4c00ea99e971f70f336f2ed71 WatchSource:0}: Error finding container f6e792a36ffff0a4c915288c79cc26b0b62b76f4c00ea99e971f70f336f2ed71: Status 404 returned error can't find the container with id f6e792a36ffff0a4c915288c79cc26b0b62b76f4c00ea99e971f70f336f2ed71 Feb 24 15:13:47 crc kubenswrapper[4982]: W0224 15:13:47.897087 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d WatchSource:0}: Error finding container 0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d: Status 404 returned error can't find the container with id 0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d Feb 24 15:13:47 crc kubenswrapper[4982]: W0224 15:13:47.904435 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod292f4788_1eb7_4407_9bb5_3fa8ca1d5702.slice/crio-a9d12751e16015b382d3cb81ee59447281aaeab02b435e897eaa2d7b95f1e8c2 WatchSource:0}: Error finding container a9d12751e16015b382d3cb81ee59447281aaeab02b435e897eaa2d7b95f1e8c2: Status 404 returned error can't find the container with id a9d12751e16015b382d3cb81ee59447281aaeab02b435e897eaa2d7b95f1e8c2 Feb 24 15:13:48 crc kubenswrapper[4982]: E0224 15:13:48.160955 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode83a7473_0ae0_4c93_b9cf_91600f3f0a66.slice/crio-conmon-e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.673280 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.802441 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-sb\") pod \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.802857 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zzbp\" (UniqueName: \"kubernetes.io/projected/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-kube-api-access-6zzbp\") pod \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.803001 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-config\") pod \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.803166 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-svc\") pod \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.803297 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-nb\") pod \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.803415 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-swift-storage-0\") pod \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\" (UID: \"e83a7473-0ae0-4c93-b9cf-91600f3f0a66\") " Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.808688 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-kube-api-access-6zzbp" (OuterVolumeSpecName: "kube-api-access-6zzbp") pod "e83a7473-0ae0-4c93-b9cf-91600f3f0a66" (UID: "e83a7473-0ae0-4c93-b9cf-91600f3f0a66"). InnerVolumeSpecName "kube-api-access-6zzbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.834658 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78bc8447b6-wqsr4" event={"ID":"08ab5470-a7ff-4276-86f0-0fb046f32b03","Type":"ContainerStarted","Data":"b6896e14f215ce0653b4a1acfa503455ce4c8d01e117663140c06e0178677844"} Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.834705 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78bc8447b6-wqsr4" event={"ID":"08ab5470-a7ff-4276-86f0-0fb046f32b03","Type":"ContainerStarted","Data":"f6e792a36ffff0a4c915288c79cc26b0b62b76f4c00ea99e971f70f336f2ed71"} Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.842316 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed67ff00-778a-4253-8f39-52d8ecbcc41b","Type":"ContainerStarted","Data":"0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d"} Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.851592 4982 generic.go:334] "Generic (PLEG): container finished" podID="292f4788-1eb7-4407-9bb5-3fa8ca1d5702" containerID="ede7f849e484ea214e080e1331ef5f6284fb9e4982522d86ca8c23337d86157a" exitCode=0 Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.851683 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" event={"ID":"292f4788-1eb7-4407-9bb5-3fa8ca1d5702","Type":"ContainerDied","Data":"ede7f849e484ea214e080e1331ef5f6284fb9e4982522d86ca8c23337d86157a"} Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.851709 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" event={"ID":"292f4788-1eb7-4407-9bb5-3fa8ca1d5702","Type":"ContainerStarted","Data":"a9d12751e16015b382d3cb81ee59447281aaeab02b435e897eaa2d7b95f1e8c2"} Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.865158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee0ec3ad-df19-4e19-a288-d6ca32779160","Type":"ContainerStarted","Data":"8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73"} Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.874340 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e83a7473-0ae0-4c93-b9cf-91600f3f0a66" (UID: "e83a7473-0ae0-4c93-b9cf-91600f3f0a66"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.887941 4982 generic.go:334] "Generic (PLEG): container finished" podID="e83a7473-0ae0-4c93-b9cf-91600f3f0a66" containerID="e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7" exitCode=0 Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.888109 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.888698 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" event={"ID":"e83a7473-0ae0-4c93-b9cf-91600f3f0a66","Type":"ContainerDied","Data":"e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7"} Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.888736 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6r9d" event={"ID":"e83a7473-0ae0-4c93-b9cf-91600f3f0a66","Type":"ContainerDied","Data":"9d01787f045a95c6ed5c2a1dc708e7d843d960a05d3983e4fa82d3587c9c6e60"} Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.888756 4982 scope.go:117] "RemoveContainer" containerID="e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.890323 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e83a7473-0ae0-4c93-b9cf-91600f3f0a66" (UID: "e83a7473-0ae0-4c93-b9cf-91600f3f0a66"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.897217 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-config" (OuterVolumeSpecName: "config") pod "e83a7473-0ae0-4c93-b9cf-91600f3f0a66" (UID: "e83a7473-0ae0-4c93-b9cf-91600f3f0a66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.899796 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6557c4cbb4-n6w8z" event={"ID":"26e32ece-d925-472d-91cd-db19b7bbb9ed","Type":"ContainerStarted","Data":"12c2491760672c26ef43229ea84e52e3773fa012e4f8c4b2562a285133b18644"} Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.915359 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.915394 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.915405 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.915458 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zzbp\" (UniqueName: \"kubernetes.io/projected/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-kube-api-access-6zzbp\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.920913 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e83a7473-0ae0-4c93-b9cf-91600f3f0a66" (UID: "e83a7473-0ae0-4c93-b9cf-91600f3f0a66"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.926535 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.926490853 podStartE2EDuration="9.926490853s" podCreationTimestamp="2026-02-24 15:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:48.90875657 +0000 UTC m=+1490.527815073" watchObservedRunningTime="2026-02-24 15:13:48.926490853 +0000 UTC m=+1490.545549346" Feb 24 15:13:48 crc kubenswrapper[4982]: I0224 15:13:48.975075 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e83a7473-0ae0-4c93-b9cf-91600f3f0a66" (UID: "e83a7473-0ae0-4c93-b9cf-91600f3f0a66"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.018103 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.018127 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e83a7473-0ae0-4c93-b9cf-91600f3f0a66-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.226694 4982 scope.go:117] "RemoveContainer" containerID="162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.316534 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5466bd89fd-vjk4t"] Feb 24 15:13:49 crc kubenswrapper[4982]: E0224 15:13:49.316961 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83a7473-0ae0-4c93-b9cf-91600f3f0a66" containerName="dnsmasq-dns" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.316972 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83a7473-0ae0-4c93-b9cf-91600f3f0a66" containerName="dnsmasq-dns" Feb 24 15:13:49 crc kubenswrapper[4982]: E0224 15:13:49.316990 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83a7473-0ae0-4c93-b9cf-91600f3f0a66" containerName="init" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.316996 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83a7473-0ae0-4c93-b9cf-91600f3f0a66" containerName="init" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.317190 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e83a7473-0ae0-4c93-b9cf-91600f3f0a66" containerName="dnsmasq-dns" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.359075 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.366894 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.368827 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.369256 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5466bd89fd-vjk4t"] Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.409017 4982 scope.go:117] "RemoveContainer" containerID="e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7" Feb 24 15:13:49 crc kubenswrapper[4982]: E0224 15:13:49.419926 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7\": container with ID starting with e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7 not found: ID does not exist" containerID="e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.420046 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7"} err="failed to get container status \"e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7\": rpc error: code = NotFound desc = could not find container \"e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7\": container with ID starting with e2e374a83e8d86a03734291ac5eb02b7e575c28d309c70b2cc188fbb86ea8cc7 not found: ID does not exist" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.420092 4982 scope.go:117] "RemoveContainer" containerID="162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158" Feb 24 15:13:49 crc kubenswrapper[4982]: E0224 15:13:49.421790 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158\": container with ID starting with 162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158 not found: ID does not exist" containerID="162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.421834 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158"} err="failed to get container status \"162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158\": rpc error: code = NotFound desc = could not find container \"162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158\": container with ID starting with 162fd792e1e19ef519278017a8ed85b3ebf280eed631e7660fb5d588609b3158 not found: ID does not exist" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.578038 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-combined-ca-bundle\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.578176 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-config-data-custom\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.578233 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ldq\" (UniqueName: \"kubernetes.io/projected/ba8a29e9-d071-470e-80eb-8c749b582614-kube-api-access-z8ldq\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.578297 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-public-tls-certs\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.578683 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-internal-tls-certs\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.578874 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba8a29e9-d071-470e-80eb-8c749b582614-logs\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.578931 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-config-data\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.582393 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6r9d"] Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.615156 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6r9d"] Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.682388 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-combined-ca-bundle\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.682457 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-config-data-custom\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 
15:13:49.682489 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ldq\" (UniqueName: \"kubernetes.io/projected/ba8a29e9-d071-470e-80eb-8c749b582614-kube-api-access-z8ldq\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.682537 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-public-tls-certs\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.682571 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-internal-tls-certs\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.682656 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba8a29e9-d071-470e-80eb-8c749b582614-logs\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.682688 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-config-data\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.684792 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba8a29e9-d071-470e-80eb-8c749b582614-logs\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.690581 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-combined-ca-bundle\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.691463 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-public-tls-certs\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.691871 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-config-data\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.695676 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-internal-tls-certs\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.698163 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba8a29e9-d071-470e-80eb-8c749b582614-config-data-custom\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.706904 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ldq\" (UniqueName: \"kubernetes.io/projected/ba8a29e9-d071-470e-80eb-8c749b582614-kube-api-access-z8ldq\") pod \"barbican-api-5466bd89fd-vjk4t\" (UID: \"ba8a29e9-d071-470e-80eb-8c749b582614\") " pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.875240 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.918364 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6557c4cbb4-n6w8z" event={"ID":"26e32ece-d925-472d-91cd-db19b7bbb9ed","Type":"ContainerStarted","Data":"0885b22a599c06fb9cd14603572273bc3cbb167e5dcc8cf50a093778df4aab63"} Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.919768 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.919891 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.941803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78bc8447b6-wqsr4" event={"ID":"08ab5470-a7ff-4276-86f0-0fb046f32b03","Type":"ContainerStarted","Data":"8c1d16c9e4dea6ffd09e69a8010ba0cfa359df15d24558375d3d5feb0da4b9bb"} Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.943033 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.943087 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.956195 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed67ff00-778a-4253-8f39-52d8ecbcc41b","Type":"ContainerStarted","Data":"8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0"} Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.972975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" event={"ID":"292f4788-1eb7-4407-9bb5-3fa8ca1d5702","Type":"ContainerStarted","Data":"cbd99ad069a2ae3b7fd17f3ad5d50d2455211028049fe8c3bf269906392789bd"} Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.973954 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:49 crc kubenswrapper[4982]: I0224 15:13:49.998864 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6557c4cbb4-n6w8z" 
podStartSLOduration=5.998837649 podStartE2EDuration="5.998837649s" podCreationTimestamp="2026-02-24 15:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:49.943570943 +0000 UTC m=+1491.562629456" watchObservedRunningTime="2026-02-24 15:13:49.998837649 +0000 UTC m=+1491.617896142" Feb 24 15:13:50 crc kubenswrapper[4982]: I0224 15:13:50.026459 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78bc8447b6-wqsr4" podStartSLOduration=5.026434771 podStartE2EDuration="5.026434771s" podCreationTimestamp="2026-02-24 15:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:49.98932709 +0000 UTC m=+1491.608385593" watchObservedRunningTime="2026-02-24 15:13:50.026434771 +0000 UTC m=+1491.645493264" Feb 24 15:13:50 crc kubenswrapper[4982]: I0224 15:13:50.041707 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:50 crc kubenswrapper[4982]: I0224 15:13:50.041746 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:50 crc kubenswrapper[4982]: I0224 15:13:50.058207 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" podStartSLOduration=6.058180785 podStartE2EDuration="6.058180785s" podCreationTimestamp="2026-02-24 15:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:50.041453929 +0000 UTC m=+1491.660512432" watchObservedRunningTime="2026-02-24 15:13:50.058180785 +0000 UTC m=+1491.677239278" Feb 24 15:13:50 crc kubenswrapper[4982]: I0224 15:13:50.219031 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:50 crc kubenswrapper[4982]: I0224 15:13:50.236384 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:51 crc kubenswrapper[4982]: I0224 15:13:51.009809 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7dmfp" event={"ID":"9d6a585f-95fd-47f7-a809-cceee8a3644a","Type":"ContainerStarted","Data":"c17b59326500174f6a2d0f9566794193490e9a65eb23870cbc37de0748469c6f"} Feb 24 15:13:51 crc kubenswrapper[4982]: I0224 15:13:51.010261 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:51 crc kubenswrapper[4982]: I0224 15:13:51.010445 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 24 15:13:51 crc kubenswrapper[4982]: I0224 15:13:51.057589 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7dmfp" podStartSLOduration=5.341915379 podStartE2EDuration="53.057566223s" podCreationTimestamp="2026-02-24 15:12:58 +0000 UTC" firstStartedPulling="2026-02-24 15:13:00.207175828 +0000 UTC m=+1441.826234321" lastFinishedPulling="2026-02-24 15:13:47.922826672 +0000 UTC m=+1489.541885165" observedRunningTime="2026-02-24 15:13:51.050296445 +0000 UTC m=+1492.669354938" watchObservedRunningTime="2026-02-24 15:13:51.057566223 +0000 UTC m=+1492.676624716" Feb 24 
15:13:51 crc kubenswrapper[4982]: I0224 15:13:51.186134 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e83a7473-0ae0-4c93-b9cf-91600f3f0a66" path="/var/lib/kubelet/pods/e83a7473-0ae0-4c93-b9cf-91600f3f0a66/volumes" Feb 24 15:13:51 crc kubenswrapper[4982]: I0224 15:13:51.186790 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5466bd89fd-vjk4t"] Feb 24 15:13:52 crc kubenswrapper[4982]: I0224 15:13:52.030897 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed67ff00-778a-4253-8f39-52d8ecbcc41b","Type":"ContainerStarted","Data":"49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9"} Feb 24 15:13:52 crc kubenswrapper[4982]: I0224 15:13:52.095489 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.09546807 podStartE2EDuration="8.09546807s" podCreationTimestamp="2026-02-24 15:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:13:52.061030063 +0000 UTC m=+1493.680088566" watchObservedRunningTime="2026-02-24 15:13:52.09546807 +0000 UTC m=+1493.714526563" Feb 24 15:13:53 crc kubenswrapper[4982]: I0224 15:13:53.045187 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 15:13:53 crc kubenswrapper[4982]: W0224 15:13:53.140346 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba8a29e9_d071_470e_80eb_8c749b582614.slice/crio-6260896a976c8a2e035480a913663ff1bf76e3ae641dcbb49867aa8a2cfd5127 WatchSource:0}: Error finding container 6260896a976c8a2e035480a913663ff1bf76e3ae641dcbb49867aa8a2cfd5127: Status 404 returned error can't find the container with id 6260896a976c8a2e035480a913663ff1bf76e3ae641dcbb49867aa8a2cfd5127 Feb 24 15:13:54 crc kubenswrapper[4982]: I0224 15:13:54.065076 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5466bd89fd-vjk4t" event={"ID":"ba8a29e9-d071-470e-80eb-8c749b582614","Type":"ContainerStarted","Data":"6260896a976c8a2e035480a913663ff1bf76e3ae641dcbb49867aa8a2cfd5127"} Feb 24 15:13:56 crc kubenswrapper[4982]: I0224 15:13:56.080378 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 24 15:13:56 crc kubenswrapper[4982]: I0224 15:13:56.080738 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 24 15:13:56 crc kubenswrapper[4982]: I0224 15:13:56.093951 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:13:56 crc kubenswrapper[4982]: I0224 15:13:56.190119 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-k7ffj"] Feb 24 15:13:56 crc kubenswrapper[4982]: I0224 15:13:56.190400 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" podUID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerName="dnsmasq-dns" containerID="cri-o://4929189ce037909806b050c77005b595c67eabda9288f1c7eecdffff5de02703" gracePeriod=10 Feb 24 15:13:56 crc kubenswrapper[4982]: I0224 15:13:56.439662 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 24 
15:13:56 crc kubenswrapper[4982]: I0224 15:13:56.445863 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 24 15:13:57 crc kubenswrapper[4982]: I0224 15:13:57.118787 4982 generic.go:334] "Generic (PLEG): container finished" podID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerID="4929189ce037909806b050c77005b595c67eabda9288f1c7eecdffff5de02703" exitCode=0 Feb 24 15:13:57 crc kubenswrapper[4982]: I0224 15:13:57.118924 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" event={"ID":"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582","Type":"ContainerDied","Data":"4929189ce037909806b050c77005b595c67eabda9288f1c7eecdffff5de02703"} Feb 24 15:13:57 crc kubenswrapper[4982]: I0224 15:13:57.119240 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 24 15:13:57 crc kubenswrapper[4982]: I0224 15:13:57.119300 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 24 15:13:59 crc kubenswrapper[4982]: I0224 15:13:59.247692 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" podUID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.188:5353: connect: connection refused" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.145368 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532434-bt9vj"] Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.147399 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532434-bt9vj" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.149862 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.152034 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.152293 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.160177 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532434-bt9vj"] Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.194725 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-78bc8447b6-wqsr4" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.204:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.195039 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-78bc8447b6-wqsr4" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.204:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.286372 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thp6\" (UniqueName: 
\"kubernetes.io/projected/a9f6a4ae-f033-46bd-8f3b-5c625216a825-kube-api-access-2thp6\") pod \"auto-csr-approver-29532434-bt9vj\" (UID: \"a9f6a4ae-f033-46bd-8f3b-5c625216a825\") " pod="openshift-infra/auto-csr-approver-29532434-bt9vj" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.388801 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2thp6\" (UniqueName: \"kubernetes.io/projected/a9f6a4ae-f033-46bd-8f3b-5c625216a825-kube-api-access-2thp6\") pod \"auto-csr-approver-29532434-bt9vj\" (UID: \"a9f6a4ae-f033-46bd-8f3b-5c625216a825\") " pod="openshift-infra/auto-csr-approver-29532434-bt9vj" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.408419 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2thp6\" (UniqueName: \"kubernetes.io/projected/a9f6a4ae-f033-46bd-8f3b-5c625216a825-kube-api-access-2thp6\") pod \"auto-csr-approver-29532434-bt9vj\" (UID: \"a9f6a4ae-f033-46bd-8f3b-5c625216a825\") " pod="openshift-infra/auto-csr-approver-29532434-bt9vj" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.466861 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532434-bt9vj" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.517155 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78bc8447b6-wqsr4" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 15:14:00 crc kubenswrapper[4982]: I0224 15:14:00.763527 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:14:02 crc kubenswrapper[4982]: I0224 15:14:02.330528 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:14:02 crc kubenswrapper[4982]: I0224 15:14:02.855985 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 24 15:14:02 crc kubenswrapper[4982]: I0224 15:14:02.856578 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 15:14:02 crc kubenswrapper[4982]: I0224 15:14:02.867033 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 24 15:14:02 crc kubenswrapper[4982]: I0224 15:14:02.867152 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 15:14:02 crc kubenswrapper[4982]: I0224 15:14:02.867709 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 24 15:14:02 crc kubenswrapper[4982]: I0224 15:14:02.945564 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 24 15:14:04 crc kubenswrapper[4982]: I0224 15:14:04.247212 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" podUID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.188:5353: connect: connection refused" Feb 24 15:14:04 crc kubenswrapper[4982]: E0224 15:14:04.885314 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" 
Feb 24 15:14:04 crc kubenswrapper[4982]: E0224 15:14:04.885603 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfwfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 24 15:14:04 crc kubenswrapper[4982]: E0224 15:14:04.887563 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1"
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.275862 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerName="sg-core" containerID="cri-o://6ea7ac2c9eaee9dbb02d9413c13603b37f716bd79cadd224b8a2999f2f737e5a" gracePeriod=30
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.286467 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerName="ceilometer-notification-agent" containerID="cri-o://989a0da191dd8d7d31472c1820c66f5063e71ee1d394f41d90252bd2dd9703ac" gracePeriod=30
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.460150 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj"
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.575861 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-svc\") pod \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") "
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.576213 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-nb\") pod \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") "
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.576338 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-config\") pod \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") "
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.576623 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-sb\") pod \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") "
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.576656 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xscwx\" (UniqueName: \"kubernetes.io/projected/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-kube-api-access-xscwx\") pod \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") "
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.576696 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-swift-storage-0\") pod \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\" (UID: \"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582\") "
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.581948 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-kube-api-access-xscwx" (OuterVolumeSpecName: "kube-api-access-xscwx") pod "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" (UID: "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582"). InnerVolumeSpecName "kube-api-access-xscwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.680002 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xscwx\" (UniqueName: \"kubernetes.io/projected/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-kube-api-access-xscwx\") on node \"crc\" DevicePath \"\""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.698268 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532434-bt9vj"]
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.719480 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" (UID: "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.719534 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" (UID: "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.720743 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" (UID: "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:14:05 crc kubenswrapper[4982]: W0224 15:14:05.723339 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9f6a4ae_f033_46bd_8f3b_5c625216a825.slice/crio-3d844cf5bb7e95ce37eeb926eb8f1bf1395e1056b2c5d40aa6bfc060a446979a WatchSource:0}: Error finding container 3d844cf5bb7e95ce37eeb926eb8f1bf1395e1056b2c5d40aa6bfc060a446979a: Status 404 returned error can't find the container with id 3d844cf5bb7e95ce37eeb926eb8f1bf1395e1056b2c5d40aa6bfc060a446979a
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.739078 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-config" (OuterVolumeSpecName: "config") pod "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" (UID: "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.740640 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" (UID: "ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.784069 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.784100 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.784114 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.784123 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 24 15:14:05 crc kubenswrapper[4982]: I0224 15:14:05.784147 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582-config\") on node \"crc\" DevicePath \"\""
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.288423 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77f5949dd7-rps9v" event={"ID":"7334fe93-50f7-486c-a11a-1cac15b026da","Type":"ContainerStarted","Data":"fd52ab0770c798c580ab52ab26d0910dcaa25800045ddae59946a9a46d2b09c6"}
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.288464 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77f5949dd7-rps9v" event={"ID":"7334fe93-50f7-486c-a11a-1cac15b026da","Type":"ContainerStarted","Data":"9c157e0e9a1684b578e3bb757071e9329ff36abc9afc648ee4d14618aed21b7c"}
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.292522 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532434-bt9vj" event={"ID":"a9f6a4ae-f033-46bd-8f3b-5c625216a825","Type":"ContainerStarted","Data":"3d844cf5bb7e95ce37eeb926eb8f1bf1395e1056b2c5d40aa6bfc060a446979a"}
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.296471 4982 generic.go:334] "Generic (PLEG): container finished" podID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerID="6ea7ac2c9eaee9dbb02d9413c13603b37f716bd79cadd224b8a2999f2f737e5a" exitCode=2
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.296556 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1","Type":"ContainerDied","Data":"6ea7ac2c9eaee9dbb02d9413c13603b37f716bd79cadd224b8a2999f2f737e5a"}
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.298533 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5466bd89fd-vjk4t" event={"ID":"ba8a29e9-d071-470e-80eb-8c749b582614","Type":"ContainerStarted","Data":"1d4279d1ea20d6f59aa7ed1f6e7bebd592b23f2822b3d6dcae75925353d6f30b"}
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.298572 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5466bd89fd-vjk4t" event={"ID":"ba8a29e9-d071-470e-80eb-8c749b582614","Type":"ContainerStarted","Data":"6ae03eb8bb654c7568ba667feca30d6138d9aba155653d4de5b91987819216c5"}
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.298665 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5466bd89fd-vjk4t"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.298681 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5466bd89fd-vjk4t"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.300787 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" event={"ID":"17fd2f10-64fe-4299-9f45-b81e02687f53","Type":"ContainerStarted","Data":"fc87c443b7e171f37345b42428da2720ce6985efee9de2583b4fbb3835b8099a"}
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.300819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" event={"ID":"17fd2f10-64fe-4299-9f45-b81e02687f53","Type":"ContainerStarted","Data":"6c570c6d1500f5ebbfd10fc58143b8b3bc1e1df217e2bfe6f51bbd18f5aa61a8"}
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.306351 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj" event={"ID":"ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582","Type":"ContainerDied","Data":"2dc01484d8268caae737c06f748fd0607d3c4ff2a888f155714f170d003512d8"}
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.306394 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-k7ffj"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.306426 4982 scope.go:117] "RemoveContainer" containerID="4929189ce037909806b050c77005b595c67eabda9288f1c7eecdffff5de02703"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.317378 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-77f5949dd7-rps9v" podStartSLOduration=3.956677456 podStartE2EDuration="22.317355696s" podCreationTimestamp="2026-02-24 15:13:44 +0000 UTC" firstStartedPulling="2026-02-24 15:13:46.572113126 +0000 UTC m=+1488.191171619" lastFinishedPulling="2026-02-24 15:14:04.932791366 +0000 UTC m=+1506.551849859" observedRunningTime="2026-02-24 15:14:06.305558064 +0000 UTC m=+1507.924616557" watchObservedRunningTime="2026-02-24 15:14:06.317355696 +0000 UTC m=+1507.936414189"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.349398 4982 scope.go:117] "RemoveContainer" containerID="5266529288ba1b8db3c86e52b2f94ff4322024b55d3e3c944dace38e874dcf7a"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.356908 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-77797c9b78-rvhwv" podStartSLOduration=4.100678073 podStartE2EDuration="22.356889172s" podCreationTimestamp="2026-02-24 15:13:44 +0000 UTC" firstStartedPulling="2026-02-24 15:13:46.658247638 +0000 UTC m=+1488.277306131" lastFinishedPulling="2026-02-24 15:14:04.914458737 +0000 UTC m=+1506.533517230" observedRunningTime="2026-02-24 15:14:06.335365227 +0000 UTC m=+1507.954423740" watchObservedRunningTime="2026-02-24 15:14:06.356889172 +0000 UTC m=+1507.975947665"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.362707 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5466bd89fd-vjk4t" podStartSLOduration=17.36269088 podStartE2EDuration="17.36269088s" podCreationTimestamp="2026-02-24 15:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:06.351389003 +0000 UTC m=+1507.970447516" watchObservedRunningTime="2026-02-24 15:14:06.36269088 +0000 UTC m=+1507.981749373"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.403930 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-k7ffj"]
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.427771 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-k7ffj"]
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.466523 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-75468f4444-6cfm8" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.468734 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-75468f4444-6cfm8" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 24 15:14:06 crc kubenswrapper[4982]: I0224 15:14:06.468850 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-75468f4444-6cfm8" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 24 15:14:07 crc kubenswrapper[4982]: I0224 15:14:07.163241 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" path="/var/lib/kubelet/pods/ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582/volumes"
Feb 24 15:14:07 crc kubenswrapper[4982]: I0224 15:14:07.321587 4982 generic.go:334] "Generic (PLEG): container finished" podID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerID="989a0da191dd8d7d31472c1820c66f5063e71ee1d394f41d90252bd2dd9703ac" exitCode=0
Feb 24 15:14:07 crc kubenswrapper[4982]: I0224 15:14:07.321794 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1","Type":"ContainerDied","Data":"989a0da191dd8d7d31472c1820c66f5063e71ee1d394f41d90252bd2dd9703ac"}
Feb 24 15:14:07 crc kubenswrapper[4982]: I0224 15:14:07.950072 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.137650 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-config-data\") pod \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") "
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.138192 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-log-httpd\") pod \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") "
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.138262 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-run-httpd\") pod \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") "
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.138302 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-sg-core-conf-yaml\") pod \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") "
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.138369 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfwfv\" (UniqueName: \"kubernetes.io/projected/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-kube-api-access-qfwfv\") pod \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") "
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.138396 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-combined-ca-bundle\") pod \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") "
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.138447 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-scripts\") pod \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\" (UID: \"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1\") "
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.144206 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" (UID: "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.145476 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" (UID: "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.147169 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-kube-api-access-qfwfv" (OuterVolumeSpecName: "kube-api-access-qfwfv") pod "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" (UID: "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1"). InnerVolumeSpecName "kube-api-access-qfwfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.149165 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-scripts" (OuterVolumeSpecName: "scripts") pod "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" (UID: "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.186575 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" (UID: "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.188707 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-config-data" (OuterVolumeSpecName: "config-data") pod "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" (UID: "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.188754 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" (UID: "9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1"). InnerVolumeSpecName "sg-core-conf-yaml".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.242994 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.243292 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.243312 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.243321 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.243328 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.243339 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfwfv\" (UniqueName: \"kubernetes.io/projected/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-kube-api-access-qfwfv\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.243347 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.334490 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532434-bt9vj" event={"ID":"a9f6a4ae-f033-46bd-8f3b-5c625216a825","Type":"ContainerStarted","Data":"487268aa12812e782accb6056345b31223df0c570dcf4cd02d6f35a84ac9ff01"} Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.336417 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1","Type":"ContainerDied","Data":"7d03941a6f2aa5e37acabe32ee50b9fef6b8f33d2394be65fd85f4958a68f015"} Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.336459 4982 scope.go:117] "RemoveContainer" containerID="6ea7ac2c9eaee9dbb02d9413c13603b37f716bd79cadd224b8a2999f2f737e5a" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.336524 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.352764 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532434-bt9vj" podStartSLOduration=6.682290486 podStartE2EDuration="8.352745931s" podCreationTimestamp="2026-02-24 15:14:00 +0000 UTC" firstStartedPulling="2026-02-24 15:14:05.72710192 +0000 UTC m=+1507.346160413" lastFinishedPulling="2026-02-24 15:14:07.397557365 +0000 UTC m=+1509.016615858" observedRunningTime="2026-02-24 15:14:08.350947822 +0000 UTC m=+1509.970006315" watchObservedRunningTime="2026-02-24 15:14:08.352745931 +0000 UTC m=+1509.971804424" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.369200 4982 scope.go:117] "RemoveContainer" containerID="989a0da191dd8d7d31472c1820c66f5063e71ee1d394f41d90252bd2dd9703ac" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.421988 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.454714 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.464980 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:08 crc kubenswrapper[4982]: E0224 15:14:08.465520 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerName="sg-core" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.465545 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerName="sg-core" Feb 24 15:14:08 crc kubenswrapper[4982]: E0224 15:14:08.465577 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerName="ceilometer-notification-agent" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.465586 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerName="ceilometer-notification-agent" Feb 24 15:14:08 crc kubenswrapper[4982]: E0224 15:14:08.465615 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerName="dnsmasq-dns" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.465623 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerName="dnsmasq-dns" Feb 24 15:14:08 crc kubenswrapper[4982]: E0224 15:14:08.465647 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerName="init" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.465655 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerName="init" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.465891 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerName="ceilometer-notification-agent" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.465912 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7f3625-b3c9-4e34-8b4c-8bcaa44a1582" containerName="dnsmasq-dns" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.465942 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" containerName="sg-core" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.468351 
4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.471609 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.471848 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.475056 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.525787 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-65698f94df-gbcsr" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.526008 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-65698f94df-gbcsr" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.528275 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-65698f94df-gbcsr" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.652669 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.652763 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-config-data\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.652835 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-run-httpd\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.652907 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-log-httpd\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.652986 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-scripts\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.653029 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.653089 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk64n\" (UniqueName: \"kubernetes.io/projected/fdbad9e7-6972-4ede-a463-6c390f50202f-kube-api-access-bk64n\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.738555 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.738903 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.755441 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-scripts\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.755569 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.755648 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk64n\" (UniqueName: \"kubernetes.io/projected/fdbad9e7-6972-4ede-a463-6c390f50202f-kube-api-access-bk64n\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.755847 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.755945 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-config-data\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.756051 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-run-httpd\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.756095 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-log-httpd\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.756956 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-run-httpd\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.756979 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-log-httpd\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.765122 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.770560 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.771995 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-config-data\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.774799 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-scripts\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.780219 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk64n\" (UniqueName: \"kubernetes.io/projected/fdbad9e7-6972-4ede-a463-6c390f50202f-kube-api-access-bk64n\") pod \"ceilometer-0\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " pod="openstack/ceilometer-0" Feb 24 15:14:08 crc kubenswrapper[4982]: I0224 15:14:08.785288 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:09 crc kubenswrapper[4982]: I0224 15:14:09.163310 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1" path="/var/lib/kubelet/pods/9a30c971-5fb4-4ed3-a23e-b4b0b68bd2e1/volumes" Feb 24 15:14:09 crc kubenswrapper[4982]: I0224 15:14:09.453064 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:10 crc kubenswrapper[4982]: I0224 15:14:10.384870 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerStarted","Data":"dd9fd9d3d086fca37c0c38f447e9eff337ab43673666d74e6fbcd4087b35262e"} Feb 24 15:14:11 crc kubenswrapper[4982]: I0224 15:14:11.416625 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerStarted","Data":"8b8921a15d957d8c67441edf16deca3f2f24f204e4f435a5eb4d3b9c326fb614"} Feb 24 15:14:11 crc kubenswrapper[4982]: I0224 15:14:11.416975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerStarted","Data":"ce385b767124189a353ece95a98b8ab31d0acbf4314723d5b59b4e3194e0b165"} Feb 24 15:14:12 crc kubenswrapper[4982]: I0224 15:14:12.428119 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerStarted","Data":"37c0f5cbeca2aa684cd5488275807e6f6be2aef5bec20148e3b035f05251fedf"} Feb 24 15:14:12 crc kubenswrapper[4982]: I0224 15:14:12.429902 4982 generic.go:334] "Generic (PLEG): container finished" podID="a9f6a4ae-f033-46bd-8f3b-5c625216a825" containerID="487268aa12812e782accb6056345b31223df0c570dcf4cd02d6f35a84ac9ff01" exitCode=0 Feb 24 15:14:12 crc kubenswrapper[4982]: I0224 15:14:12.429941 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532434-bt9vj" event={"ID":"a9f6a4ae-f033-46bd-8f3b-5c625216a825","Type":"ContainerDied","Data":"487268aa12812e782accb6056345b31223df0c570dcf4cd02d6f35a84ac9ff01"} Feb 24 15:14:13 crc kubenswrapper[4982]: I0224 15:14:13.878464 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532434-bt9vj" Feb 24 15:14:13 crc kubenswrapper[4982]: I0224 15:14:13.976194 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2thp6\" (UniqueName: \"kubernetes.io/projected/a9f6a4ae-f033-46bd-8f3b-5c625216a825-kube-api-access-2thp6\") pod \"a9f6a4ae-f033-46bd-8f3b-5c625216a825\" (UID: \"a9f6a4ae-f033-46bd-8f3b-5c625216a825\") " Feb 24 15:14:13 crc kubenswrapper[4982]: I0224 15:14:13.984792 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f6a4ae-f033-46bd-8f3b-5c625216a825-kube-api-access-2thp6" (OuterVolumeSpecName: "kube-api-access-2thp6") pod "a9f6a4ae-f033-46bd-8f3b-5c625216a825" (UID: "a9f6a4ae-f033-46bd-8f3b-5c625216a825"). InnerVolumeSpecName "kube-api-access-2thp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:14 crc kubenswrapper[4982]: I0224 15:14:14.079171 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2thp6\" (UniqueName: \"kubernetes.io/projected/a9f6a4ae-f033-46bd-8f3b-5c625216a825-kube-api-access-2thp6\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:14 crc kubenswrapper[4982]: I0224 15:14:14.452051 4982 generic.go:334] "Generic (PLEG): container finished" podID="9d6a585f-95fd-47f7-a809-cceee8a3644a" containerID="c17b59326500174f6a2d0f9566794193490e9a65eb23870cbc37de0748469c6f" exitCode=0 Feb 24 15:14:14 crc kubenswrapper[4982]: I0224 15:14:14.452158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7dmfp" event={"ID":"9d6a585f-95fd-47f7-a809-cceee8a3644a","Type":"ContainerDied","Data":"c17b59326500174f6a2d0f9566794193490e9a65eb23870cbc37de0748469c6f"} Feb 24 15:14:14 crc kubenswrapper[4982]: I0224 15:14:14.455757 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532434-bt9vj" event={"ID":"a9f6a4ae-f033-46bd-8f3b-5c625216a825","Type":"ContainerDied","Data":"3d844cf5bb7e95ce37eeb926eb8f1bf1395e1056b2c5d40aa6bfc060a446979a"} Feb 24 15:14:14 crc kubenswrapper[4982]: I0224 15:14:14.455798 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d844cf5bb7e95ce37eeb926eb8f1bf1395e1056b2c5d40aa6bfc060a446979a" Feb 24 15:14:14 crc kubenswrapper[4982]: I0224 15:14:14.455818 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532434-bt9vj" Feb 24 15:14:14 crc kubenswrapper[4982]: I0224 15:14:14.516989 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532428-wdr4v"] Feb 24 15:14:14 crc kubenswrapper[4982]: I0224 15:14:14.527715 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532428-wdr4v"] Feb 24 15:14:15 crc kubenswrapper[4982]: I0224 15:14:15.180032 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370d1fb7-83d9-4153-ac92-24d499c15b76" path="/var/lib/kubelet/pods/370d1fb7-83d9-4153-ac92-24d499c15b76/volumes" Feb 24 15:14:15 crc kubenswrapper[4982]: I0224 15:14:15.468923 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerStarted","Data":"b8b9eade38dc78fab0c669ba24f8f57dc9b6d72839a876e104ba98b2acec23f4"} Feb 24 15:14:15 crc kubenswrapper[4982]: I0224 15:14:15.504882 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.887161901 podStartE2EDuration="7.504855419s" podCreationTimestamp="2026-02-24 15:14:08 +0000 UTC" firstStartedPulling="2026-02-24 15:14:09.458464125 +0000 UTC m=+1511.077522618" lastFinishedPulling="2026-02-24 15:14:15.076157643 +0000 UTC m=+1516.695216136" observedRunningTime="2026-02-24 15:14:15.491900256 +0000 UTC m=+1517.110958749" watchObservedRunningTime="2026-02-24 15:14:15.504855419 +0000 UTC m=+1517.123913912" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.012522 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.128083 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-config-data\") pod \"9d6a585f-95fd-47f7-a809-cceee8a3644a\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.128295 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d6a585f-95fd-47f7-a809-cceee8a3644a-etc-machine-id\") pod \"9d6a585f-95fd-47f7-a809-cceee8a3644a\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.128400 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-scripts\") pod \"9d6a585f-95fd-47f7-a809-cceee8a3644a\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.128465 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdb2g\" (UniqueName: \"kubernetes.io/projected/9d6a585f-95fd-47f7-a809-cceee8a3644a-kube-api-access-tdb2g\") pod \"9d6a585f-95fd-47f7-a809-cceee8a3644a\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.128530 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-db-sync-config-data\") pod \"9d6a585f-95fd-47f7-a809-cceee8a3644a\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.128626 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-combined-ca-bundle\") pod \"9d6a585f-95fd-47f7-a809-cceee8a3644a\" (UID: \"9d6a585f-95fd-47f7-a809-cceee8a3644a\") " Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.131006 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d6a585f-95fd-47f7-a809-cceee8a3644a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9d6a585f-95fd-47f7-a809-cceee8a3644a" (UID: "9d6a585f-95fd-47f7-a809-cceee8a3644a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.136107 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6a585f-95fd-47f7-a809-cceee8a3644a-kube-api-access-tdb2g" (OuterVolumeSpecName: "kube-api-access-tdb2g") pod "9d6a585f-95fd-47f7-a809-cceee8a3644a" (UID: "9d6a585f-95fd-47f7-a809-cceee8a3644a"). InnerVolumeSpecName "kube-api-access-tdb2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.153797 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-scripts" (OuterVolumeSpecName: "scripts") pod "9d6a585f-95fd-47f7-a809-cceee8a3644a" (UID: "9d6a585f-95fd-47f7-a809-cceee8a3644a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.154128 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9d6a585f-95fd-47f7-a809-cceee8a3644a" (UID: "9d6a585f-95fd-47f7-a809-cceee8a3644a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.161296 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d6a585f-95fd-47f7-a809-cceee8a3644a" (UID: "9d6a585f-95fd-47f7-a809-cceee8a3644a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.203151 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-config-data" (OuterVolumeSpecName: "config-data") pod "9d6a585f-95fd-47f7-a809-cceee8a3644a" (UID: "9d6a585f-95fd-47f7-a809-cceee8a3644a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.231539 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.231581 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d6a585f-95fd-47f7-a809-cceee8a3644a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.231593 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.231605 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdb2g\" (UniqueName: \"kubernetes.io/projected/9d6a585f-95fd-47f7-a809-cceee8a3644a-kube-api-access-tdb2g\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.231619 4982 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.231632 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d6a585f-95fd-47f7-a809-cceee8a3644a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.481677 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7dmfp" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.481683 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7dmfp" event={"ID":"9d6a585f-95fd-47f7-a809-cceee8a3644a","Type":"ContainerDied","Data":"f386edaa6748cb5ebcfeae8b2f5ba7c285a8d49e063c10fbeb2c50301af3c6ba"} Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.482064 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f386edaa6748cb5ebcfeae8b2f5ba7c285a8d49e063c10fbeb2c50301af3c6ba" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.482157 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.914100 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 24 15:14:16 crc kubenswrapper[4982]: E0224 15:14:16.914545 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f6a4ae-f033-46bd-8f3b-5c625216a825" containerName="oc" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.914563 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f6a4ae-f033-46bd-8f3b-5c625216a825" containerName="oc" Feb 24 15:14:16 crc kubenswrapper[4982]: E0224 15:14:16.914600 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6a585f-95fd-47f7-a809-cceee8a3644a" containerName="cinder-db-sync" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.914607 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6a585f-95fd-47f7-a809-cceee8a3644a" containerName="cinder-db-sync" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.914821 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f6a4ae-f033-46bd-8f3b-5c625216a825" containerName="oc" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.914841 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6a585f-95fd-47f7-a809-cceee8a3644a" containerName="cinder-db-sync" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.916043 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.923395 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.923631 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wtghw" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.923767 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.923971 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.943045 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.958782 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.958831 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrg6\" (UniqueName: \"kubernetes.io/projected/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-kube-api-access-jqrg6\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.958871 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.958914 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.958946 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:16 crc kubenswrapper[4982]: I0224 15:14:16.958962 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.034734 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g5cjx"] Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.036936 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.060950 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.061001 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrg6\" (UniqueName: \"kubernetes.io/projected/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-kube-api-access-jqrg6\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.061037 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.061078 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.061109 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.061125 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.069221 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g5cjx"] Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.069511 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.072086 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.072098 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 
crc kubenswrapper[4982]: I0224 15:14:17.072392 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.075050 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.107016 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrg6\" (UniqueName: \"kubernetes.io/projected/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-kube-api-access-jqrg6\") pod \"cinder-scheduler-0\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.163017 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.163134 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6n4p\" (UniqueName: \"kubernetes.io/projected/7043c05a-72b5-47d9-a561-c562f82ae807-kube-api-access-f6n4p\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.163159 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.163224 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.163267 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.163313 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-config\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 
15:14:17.244367 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.246169 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.251811 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.251926 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.259931 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.265729 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.265809 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.265851 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-config\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.265955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.266056 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6n4p\" (UniqueName: \"kubernetes.io/projected/7043c05a-72b5-47d9-a561-c562f82ae807-kube-api-access-f6n4p\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.266127 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.268283 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.268874 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.272986 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.273042 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.298225 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-config\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.320580 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6n4p\" (UniqueName: \"kubernetes.io/projected/7043c05a-72b5-47d9-a561-c562f82ae807-kube-api-access-f6n4p\") pod \"dnsmasq-dns-6578955fd5-g5cjx\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") " pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.358876 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.371956 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rgn9\" (UniqueName: \"kubernetes.io/projected/26101d60-8191-4669-92ab-200740ed3cf8-kube-api-access-2rgn9\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.372029 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.372101 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.372196 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/26101d60-8191-4669-92ab-200740ed3cf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.372289 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26101d60-8191-4669-92ab-200740ed3cf8-logs\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.372384 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-scripts\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.372426 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.485483 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rgn9\" (UniqueName: \"kubernetes.io/projected/26101d60-8191-4669-92ab-200740ed3cf8-kube-api-access-2rgn9\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.485905 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.485980 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.486079 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/26101d60-8191-4669-92ab-200740ed3cf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.486162 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26101d60-8191-4669-92ab-200740ed3cf8-logs\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.486185 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-scripts\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.486208 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.488991 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/26101d60-8191-4669-92ab-200740ed3cf8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.489797 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26101d60-8191-4669-92ab-200740ed3cf8-logs\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.494647 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.495700 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.525447 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data-custom\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.535380 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-scripts\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.551735 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rgn9\" (UniqueName: \"kubernetes.io/projected/26101d60-8191-4669-92ab-200740ed3cf8-kube-api-access-2rgn9\") pod \"cinder-api-0\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.701781 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.781897 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.818251 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5466bd89fd-vjk4t" Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.913355 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78bc8447b6-wqsr4"] Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.913624 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78bc8447b6-wqsr4" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api-log" containerID="cri-o://b6896e14f215ce0653b4a1acfa503455ce4c8d01e117663140c06e0178677844" gracePeriod=30 Feb 24 15:14:17 crc kubenswrapper[4982]: I0224 15:14:17.913823 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78bc8447b6-wqsr4" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api" containerID="cri-o://8c1d16c9e4dea6ffd09e69a8010ba0cfa359df15d24558375d3d5feb0da4b9bb" gracePeriod=30 Feb 24 15:14:18 crc kubenswrapper[4982]: I0224 15:14:17.995382 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 24 15:14:18 crc kubenswrapper[4982]: I0224 15:14:18.153961 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-66dcfd46c4-xzlpx" Feb 24 15:14:18 crc kubenswrapper[4982]: I0224 15:14:18.558645 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g5cjx"] Feb 24 15:14:18 crc kubenswrapper[4982]: I0224 15:14:18.623177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e","Type":"ContainerStarted","Data":"08fdf2a0435d47c706ce0ccc3fb3ec560839e92dead2000ba7a5b8c9754fa139"} Feb 24 15:14:18 crc kubenswrapper[4982]: I0224 15:14:18.625622 4982 generic.go:334] "Generic (PLEG): container finished" podID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerID="b6896e14f215ce0653b4a1acfa503455ce4c8d01e117663140c06e0178677844" exitCode=143 Feb 24 15:14:18 crc kubenswrapper[4982]: I0224 15:14:18.625701 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78bc8447b6-wqsr4" event={"ID":"08ab5470-a7ff-4276-86f0-0fb046f32b03","Type":"ContainerDied","Data":"b6896e14f215ce0653b4a1acfa503455ce4c8d01e117663140c06e0178677844"} Feb 24 15:14:18 crc kubenswrapper[4982]: I0224 15:14:18.649161 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" 
event={"ID":"7043c05a-72b5-47d9-a561-c562f82ae807","Type":"ContainerStarted","Data":"c2ba41113c4eef84072fdc1f268654796009aa0f612247b179c26ba4083fcae9"} Feb 24 15:14:18 crc kubenswrapper[4982]: I0224 15:14:18.700696 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 24 15:14:19 crc kubenswrapper[4982]: I0224 15:14:19.086641 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:14:19 crc kubenswrapper[4982]: I0224 15:14:19.092371 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6557c4cbb4-n6w8z" Feb 24 15:14:19 crc kubenswrapper[4982]: I0224 15:14:19.650437 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 24 15:14:19 crc kubenswrapper[4982]: I0224 15:14:19.698738 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"26101d60-8191-4669-92ab-200740ed3cf8","Type":"ContainerStarted","Data":"2628423c2798ebd04134cf540e1f6321e981ad32cca164632598b27c6a143c45"} Feb 24 15:14:19 crc kubenswrapper[4982]: I0224 15:14:19.739541 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" event={"ID":"7043c05a-72b5-47d9-a561-c562f82ae807","Type":"ContainerStarted","Data":"4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd"} Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.243649 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75468f4444-6cfm8"] Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.243884 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75468f4444-6cfm8" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-api" containerID="cri-o://774b93cbf56825fca1510cd4e5cc4eb4c289eb871fa8d6a73eaaed170b3f6bc4" gracePeriod=30 Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.244553 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75468f4444-6cfm8" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-httpd" containerID="cri-o://d2c03da9315f1b9421e864c1c0d4dad38478cc071f11beb58b7d41896ee75bdf" gracePeriod=30 Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.268525 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f7b97458f-lsgqv"] Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.272908 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.289019 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f7b97458f-lsgqv"] Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.305849 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-75468f4444-6cfm8" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.195:9696/\": EOF" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.329745 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-public-tls-certs\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.329809 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-httpd-config\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.329831 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tng9g\" (UniqueName: \"kubernetes.io/projected/ace3b91f-7d2e-405d-a191-1260b2def481-kube-api-access-tng9g\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.329896 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-internal-tls-certs\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.330009 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-config\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.330284 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-combined-ca-bundle\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.330397 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-ovndb-tls-certs\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.431898 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-public-tls-certs\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.431977 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-httpd-config\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.432003 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tng9g\" (UniqueName: \"kubernetes.io/projected/ace3b91f-7d2e-405d-a191-1260b2def481-kube-api-access-tng9g\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.432054 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-internal-tls-certs\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.432110 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-config\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.432182 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-combined-ca-bundle\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.432240 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-ovndb-tls-certs\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.440439 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-internal-tls-certs\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.441841 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-httpd-config\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.441974 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-public-tls-certs\") pod \"neutron-6f7b97458f-lsgqv\" (UID: 
\"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.442470 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-config\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.444158 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-ovndb-tls-certs\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.455179 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tng9g\" (UniqueName: \"kubernetes.io/projected/ace3b91f-7d2e-405d-a191-1260b2def481-kube-api-access-tng9g\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.461564 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace3b91f-7d2e-405d-a191-1260b2def481-combined-ca-bundle\") pod \"neutron-6f7b97458f-lsgqv\" (UID: \"ace3b91f-7d2e-405d-a191-1260b2def481\") " pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.647086 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.784687 4982 generic.go:334] "Generic (PLEG): container finished" podID="7043c05a-72b5-47d9-a561-c562f82ae807" containerID="4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd" exitCode=0 Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.785305 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" event={"ID":"7043c05a-72b5-47d9-a561-c562f82ae807","Type":"ContainerDied","Data":"4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd"} Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.892974 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75468f4444-6cfm8" event={"ID":"1f5da815-46e1-4224-bd34-feb1cdb54446","Type":"ContainerDied","Data":"d2c03da9315f1b9421e864c1c0d4dad38478cc071f11beb58b7d41896ee75bdf"} Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.893097 4982 generic.go:334] "Generic (PLEG): container finished" podID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerID="d2c03da9315f1b9421e864c1c0d4dad38478cc071f11beb58b7d41896ee75bdf" exitCode=0 Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.904215 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e","Type":"ContainerStarted","Data":"0ddd0fa30d31a5ee8f00ee221550cb7d465a0c78fb0b81e27eb57047a390b583"} Feb 24 15:14:20 crc kubenswrapper[4982]: I0224 15:14:20.917068 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"26101d60-8191-4669-92ab-200740ed3cf8","Type":"ContainerStarted","Data":"975184eaf128e0f78606254c21956873d47cf17215c7f3d1b586dddcc9a0cd73"} Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.238651 
4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78bc8447b6-wqsr4" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.204:9311/healthcheck\": read tcp 10.217.0.2:49296->10.217.0.204:9311: read: connection reset by peer" Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.238725 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78bc8447b6-wqsr4" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.204:9311/healthcheck\": read tcp 10.217.0.2:49310->10.217.0.204:9311: read: connection reset by peer" Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.421347 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f7b97458f-lsgqv"] Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.935466 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7b97458f-lsgqv" event={"ID":"ace3b91f-7d2e-405d-a191-1260b2def481","Type":"ContainerStarted","Data":"641aebd5bbb9d2a55992e4ac225f3b00bdf320cf920023a7d6c7d4e01541c052"} Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.942966 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e","Type":"ContainerStarted","Data":"6edac1320a3854fa2a5629c14c686f18e776ae387ac8068bf88ec54c5e861229"} Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.959929 4982 generic.go:334] "Generic (PLEG): container finished" podID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerID="8c1d16c9e4dea6ffd09e69a8010ba0cfa359df15d24558375d3d5feb0da4b9bb" exitCode=0 Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.960015 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78bc8447b6-wqsr4" event={"ID":"08ab5470-a7ff-4276-86f0-0fb046f32b03","Type":"ContainerDied","Data":"8c1d16c9e4dea6ffd09e69a8010ba0cfa359df15d24558375d3d5feb0da4b9bb"} Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.960080 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78bc8447b6-wqsr4" event={"ID":"08ab5470-a7ff-4276-86f0-0fb046f32b03","Type":"ContainerDied","Data":"f6e792a36ffff0a4c915288c79cc26b0b62b76f4c00ea99e971f70f336f2ed71"} Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.960099 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6e792a36ffff0a4c915288c79cc26b0b62b76f4c00ea99e971f70f336f2ed71" Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.963420 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"26101d60-8191-4669-92ab-200740ed3cf8","Type":"ContainerStarted","Data":"616a12ba6b7ffa85d5af7a82f71f9af9d5a280686817ea3d4e410492ba111475"} Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.963538 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="26101d60-8191-4669-92ab-200740ed3cf8" containerName="cinder-api-log" containerID="cri-o://975184eaf128e0f78606254c21956873d47cf17215c7f3d1b586dddcc9a0cd73" gracePeriod=30 Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.963604 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.963640 4982 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-api-0" podUID="26101d60-8191-4669-92ab-200740ed3cf8" containerName="cinder-api" containerID="cri-o://616a12ba6b7ffa85d5af7a82f71f9af9d5a280686817ea3d4e410492ba111475" gracePeriod=30 Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.969854 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.8953671199999995 podStartE2EDuration="5.969836453s" podCreationTimestamp="2026-02-24 15:14:16 +0000 UTC" firstStartedPulling="2026-02-24 15:14:18.061779497 +0000 UTC m=+1519.680837990" lastFinishedPulling="2026-02-24 15:14:19.13624884 +0000 UTC m=+1520.755307323" observedRunningTime="2026-02-24 15:14:21.968451755 +0000 UTC m=+1523.587510248" watchObservedRunningTime="2026-02-24 15:14:21.969836453 +0000 UTC m=+1523.588894946" Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.987607 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" event={"ID":"7043c05a-72b5-47d9-a561-c562f82ae807","Type":"ContainerStarted","Data":"7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31"} Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.988444 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:21 crc kubenswrapper[4982]: I0224 15:14:21.995576 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.995555983 podStartE2EDuration="4.995555983s" podCreationTimestamp="2026-02-24 15:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:21.991948176 +0000 UTC m=+1523.611006669" watchObservedRunningTime="2026-02-24 15:14:21.995555983 +0000 UTC m=+1523.614614476" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.020821 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" podStartSLOduration=6.020804421 podStartE2EDuration="6.020804421s" podCreationTimestamp="2026-02-24 15:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:22.017240864 +0000 UTC m=+1523.636299347" watchObservedRunningTime="2026-02-24 15:14:22.020804421 +0000 UTC m=+1523.639862914" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.021012 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.104918 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data\") pod \"08ab5470-a7ff-4276-86f0-0fb046f32b03\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.105035 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data-custom\") pod \"08ab5470-a7ff-4276-86f0-0fb046f32b03\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.105066 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gljh6\" (UniqueName: \"kubernetes.io/projected/08ab5470-a7ff-4276-86f0-0fb046f32b03-kube-api-access-gljh6\") pod \"08ab5470-a7ff-4276-86f0-0fb046f32b03\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.105099 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08ab5470-a7ff-4276-86f0-0fb046f32b03-logs\") pod \"08ab5470-a7ff-4276-86f0-0fb046f32b03\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.105256 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-combined-ca-bundle\") pod \"08ab5470-a7ff-4276-86f0-0fb046f32b03\" (UID: \"08ab5470-a7ff-4276-86f0-0fb046f32b03\") " Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.111661 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08ab5470-a7ff-4276-86f0-0fb046f32b03-logs" (OuterVolumeSpecName: "logs") pod "08ab5470-a7ff-4276-86f0-0fb046f32b03" (UID: "08ab5470-a7ff-4276-86f0-0fb046f32b03"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.117063 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ab5470-a7ff-4276-86f0-0fb046f32b03-kube-api-access-gljh6" (OuterVolumeSpecName: "kube-api-access-gljh6") pod "08ab5470-a7ff-4276-86f0-0fb046f32b03" (UID: "08ab5470-a7ff-4276-86f0-0fb046f32b03"). InnerVolumeSpecName "kube-api-access-gljh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.127697 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08ab5470-a7ff-4276-86f0-0fb046f32b03" (UID: "08ab5470-a7ff-4276-86f0-0fb046f32b03"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.183324 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08ab5470-a7ff-4276-86f0-0fb046f32b03" (UID: "08ab5470-a7ff-4276-86f0-0fb046f32b03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.207607 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data" (OuterVolumeSpecName: "config-data") pod "08ab5470-a7ff-4276-86f0-0fb046f32b03" (UID: "08ab5470-a7ff-4276-86f0-0fb046f32b03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.208925 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.209016 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.209087 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gljh6\" (UniqueName: \"kubernetes.io/projected/08ab5470-a7ff-4276-86f0-0fb046f32b03-kube-api-access-gljh6\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.209144 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08ab5470-a7ff-4276-86f0-0fb046f32b03-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.209197 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ab5470-a7ff-4276-86f0-0fb046f32b03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.255063 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.416301 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 24 15:14:22 crc kubenswrapper[4982]: E0224 15:14:22.416998 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api-log" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.417085 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api-log" Feb 24 15:14:22 crc kubenswrapper[4982]: E0224 15:14:22.417172 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.417224 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.417467 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api-log" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.417568 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" containerName="barbican-api" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.418337 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.424662 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6j7v6" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.428231 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.428486 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.453403 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.517566 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config-secret\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.518131 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.518259 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.518290 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dlz9\" (UniqueName: \"kubernetes.io/projected/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-kube-api-access-2dlz9\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.620220 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.620288 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dlz9\" (UniqueName: \"kubernetes.io/projected/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-kube-api-access-2dlz9\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.620387 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config-secret\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.620412 4982 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.621350 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.627126 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config-secret\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.647182 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.650997 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dlz9\" (UniqueName: \"kubernetes.io/projected/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-kube-api-access-2dlz9\") pod \"openstackclient\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.740257 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.775726 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.805520 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.854239 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.856484 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.891711 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.933438 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/546fb62a-ffa6-4067-b267-ea1ff18ee76e-openstack-config-secret\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.933625 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546fb62a-ffa6-4067-b267-ea1ff18ee76e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.933731 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/546fb62a-ffa6-4067-b267-ea1ff18ee76e-openstack-config\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:22 crc kubenswrapper[4982]: I0224 15:14:22.933774 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6l5g\" (UniqueName: \"kubernetes.io/projected/546fb62a-ffa6-4067-b267-ea1ff18ee76e-kube-api-access-s6l5g\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient"
Feb 24 15:14:22 crc kubenswrapper[4982]: E0224 15:14:22.993739 4982 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 24 15:14:22 crc kubenswrapper[4982]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_288a42a3-f8aa-4130-b0e3-d7425cbe0ea1_0(dcfa00783c046c175048a9e093196c4b4257203a61ec7a12736e9ca133c614a6): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dcfa00783c046c175048a9e093196c4b4257203a61ec7a12736e9ca133c614a6" Netns:"/var/run/netns/61a52e67-645c-44c7-a77d-d9f360a54e5d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=dcfa00783c046c175048a9e093196c4b4257203a61ec7a12736e9ca133c614a6;K8S_POD_UID=288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1]: expected pod UID "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" but got "546fb62a-ffa6-4067-b267-ea1ff18ee76e" from Kube API
Feb 24 15:14:22 crc kubenswrapper[4982]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 24 15:14:22 crc kubenswrapper[4982]: >
Feb 24 15:14:22 crc kubenswrapper[4982]: E0224 15:14:22.993814 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 24 15:14:22 crc kubenswrapper[4982]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_288a42a3-f8aa-4130-b0e3-d7425cbe0ea1_0(dcfa00783c046c175048a9e093196c4b4257203a61ec7a12736e9ca133c614a6): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dcfa00783c046c175048a9e093196c4b4257203a61ec7a12736e9ca133c614a6" Netns:"/var/run/netns/61a52e67-645c-44c7-a77d-d9f360a54e5d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=dcfa00783c046c175048a9e093196c4b4257203a61ec7a12736e9ca133c614a6;K8S_POD_UID=288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1]: expected pod UID "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" but got "546fb62a-ffa6-4067-b267-ea1ff18ee76e" from Kube API
Feb 24 15:14:22 crc kubenswrapper[4982]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 24 15:14:22 crc kubenswrapper[4982]: > pod="openstack/openstackclient"
Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.008025 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7b97458f-lsgqv" event={"ID":"ace3b91f-7d2e-405d-a191-1260b2def481","Type":"ContainerStarted","Data":"8b826626aa222f6a800d2a514131174af1cf8b2df3046c4fa533b9db3de6b38a"} Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.008068 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7b97458f-lsgqv" event={"ID":"ace3b91f-7d2e-405d-a191-1260b2def481","Type":"ContainerStarted","Data":"d8c7733ae449b0720f86a979d8f5347e3157615a4ae19c8df2d82d771af34082"} Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.009418 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.014479 4982 generic.go:334] "Generic (PLEG): container finished" podID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerID="774b93cbf56825fca1510cd4e5cc4eb4c289eb871fa8d6a73eaaed170b3f6bc4" exitCode=0 Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.014566 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75468f4444-6cfm8" event={"ID":"1f5da815-46e1-4224-bd34-feb1cdb54446","Type":"ContainerDied","Data":"774b93cbf56825fca1510cd4e5cc4eb4c289eb871fa8d6a73eaaed170b3f6bc4"} Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.025835 4982 generic.go:334] "Generic (PLEG): container finished" podID="26101d60-8191-4669-92ab-200740ed3cf8" containerID="975184eaf128e0f78606254c21956873d47cf17215c7f3d1b586dddcc9a0cd73" exitCode=143 Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.026898 4982 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-api-78bc8447b6-wqsr4" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.027064 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"26101d60-8191-4669-92ab-200740ed3cf8","Type":"ContainerDied","Data":"975184eaf128e0f78606254c21956873d47cf17215c7f3d1b586dddcc9a0cd73"} Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.030202 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f7b97458f-lsgqv" podStartSLOduration=3.030183641 podStartE2EDuration="3.030183641s" podCreationTimestamp="2026-02-24 15:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:23.028801154 +0000 UTC m=+1524.647859667" watchObservedRunningTime="2026-02-24 15:14:23.030183641 +0000 UTC m=+1524.649242134" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.037918 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546fb62a-ffa6-4067-b267-ea1ff18ee76e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.038170 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/546fb62a-ffa6-4067-b267-ea1ff18ee76e-openstack-config\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.038254 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6l5g\" (UniqueName: \"kubernetes.io/projected/546fb62a-ffa6-4067-b267-ea1ff18ee76e-kube-api-access-s6l5g\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.038328 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/546fb62a-ffa6-4067-b267-ea1ff18ee76e-openstack-config-secret\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.040144 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/546fb62a-ffa6-4067-b267-ea1ff18ee76e-openstack-config\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.044020 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/546fb62a-ffa6-4067-b267-ea1ff18ee76e-openstack-config-secret\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.047535 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546fb62a-ffa6-4067-b267-ea1ff18ee76e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:23 crc kubenswrapper[4982]: 
I0224 15:14:23.060391 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6l5g\" (UniqueName: \"kubernetes.io/projected/546fb62a-ffa6-4067-b267-ea1ff18ee76e-kube-api-access-s6l5g\") pod \"openstackclient\" (UID: \"546fb62a-ffa6-4067-b267-ea1ff18ee76e\") " pod="openstack/openstackclient" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.074915 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78bc8447b6-wqsr4"] Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.131997 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78bc8447b6-wqsr4"] Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.169987 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ab5470-a7ff-4276-86f0-0fb046f32b03" path="/var/lib/kubelet/pods/08ab5470-a7ff-4276-86f0-0fb046f32b03/volumes" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.258987 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.344684 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.445419 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-ovndb-tls-certs\") pod \"1f5da815-46e1-4224-bd34-feb1cdb54446\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.445489 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-config\") pod \"1f5da815-46e1-4224-bd34-feb1cdb54446\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.445694 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tghm6\" (UniqueName: \"kubernetes.io/projected/1f5da815-46e1-4224-bd34-feb1cdb54446-kube-api-access-tghm6\") pod \"1f5da815-46e1-4224-bd34-feb1cdb54446\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.445763 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-combined-ca-bundle\") pod \"1f5da815-46e1-4224-bd34-feb1cdb54446\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.445821 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-httpd-config\") pod \"1f5da815-46e1-4224-bd34-feb1cdb54446\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.451191 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f5da815-46e1-4224-bd34-feb1cdb54446-kube-api-access-tghm6" (OuterVolumeSpecName: "kube-api-access-tghm6") pod "1f5da815-46e1-4224-bd34-feb1cdb54446" (UID: "1f5da815-46e1-4224-bd34-feb1cdb54446"). InnerVolumeSpecName "kube-api-access-tghm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.452673 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1f5da815-46e1-4224-bd34-feb1cdb54446" (UID: "1f5da815-46e1-4224-bd34-feb1cdb54446"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.546654 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f5da815-46e1-4224-bd34-feb1cdb54446" (UID: "1f5da815-46e1-4224-bd34-feb1cdb54446"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.548267 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-combined-ca-bundle\") pod \"1f5da815-46e1-4224-bd34-feb1cdb54446\" (UID: \"1f5da815-46e1-4224-bd34-feb1cdb54446\") " Feb 24 15:14:23 crc kubenswrapper[4982]: W0224 15:14:23.548371 4982 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1f5da815-46e1-4224-bd34-feb1cdb54446/volumes/kubernetes.io~secret/combined-ca-bundle Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.548393 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f5da815-46e1-4224-bd34-feb1cdb54446" (UID: "1f5da815-46e1-4224-bd34-feb1cdb54446"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.554303 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tghm6\" (UniqueName: \"kubernetes.io/projected/1f5da815-46e1-4224-bd34-feb1cdb54446-kube-api-access-tghm6\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.554345 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.555669 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.561530 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-config" (OuterVolumeSpecName: "config") pod "1f5da815-46e1-4224-bd34-feb1cdb54446" (UID: "1f5da815-46e1-4224-bd34-feb1cdb54446"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.585344 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.602671 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1f5da815-46e1-4224-bd34-feb1cdb54446" (UID: "1f5da815-46e1-4224-bd34-feb1cdb54446"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.661221 4982 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:23 crc kubenswrapper[4982]: I0224 15:14:23.661256 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f5da815-46e1-4224-bd34-feb1cdb54446-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.068096 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75468f4444-6cfm8" event={"ID":"1f5da815-46e1-4224-bd34-feb1cdb54446","Type":"ContainerDied","Data":"aa374e8489b93bcfe8474d40d27946b1c830ba8140485f2d9fbf40dfdfb1ca22"} Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.068573 4982 scope.go:117] "RemoveContainer" containerID="d2c03da9315f1b9421e864c1c0d4dad38478cc071f11beb58b7d41896ee75bdf" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.068138 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75468f4444-6cfm8" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.073823 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"546fb62a-ffa6-4067-b267-ea1ff18ee76e","Type":"ContainerStarted","Data":"36e0a69de2141a513f26bd0c67fee0f8100bb99a4ccf39537075a5eb16fb061b"} Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.073886 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.087984 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.095350 4982 scope.go:117] "RemoveContainer" containerID="774b93cbf56825fca1510cd4e5cc4eb4c289eb871fa8d6a73eaaed170b3f6bc4" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.104149 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" podUID="546fb62a-ffa6-4067-b267-ea1ff18ee76e" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.113099 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75468f4444-6cfm8"] Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.131787 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75468f4444-6cfm8"] Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.172774 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dlz9\" (UniqueName: \"kubernetes.io/projected/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-kube-api-access-2dlz9\") pod \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.172828 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config\") pod \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.173035 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config-secret\") pod \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.173164 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-combined-ca-bundle\") pod \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\" (UID: \"288a42a3-f8aa-4130-b0e3-d7425cbe0ea1\") " Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.173437 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" (UID: "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.174847 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.178690 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" (UID: "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.178889 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-kube-api-access-2dlz9" (OuterVolumeSpecName: "kube-api-access-2dlz9") pod "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" (UID: "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1"). InnerVolumeSpecName "kube-api-access-2dlz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.208489 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" (UID: "288a42a3-f8aa-4130-b0e3-d7425cbe0ea1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.276845 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.276884 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:24 crc kubenswrapper[4982]: I0224 15:14:24.276893 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dlz9\" (UniqueName: \"kubernetes.io/projected/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1-kube-api-access-2dlz9\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.069076 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-68cc98f8f7-kjwtc"] Feb 24 15:14:25 crc kubenswrapper[4982]: E0224 15:14:25.069886 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-api" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.069899 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-api" Feb 24 15:14:25 crc kubenswrapper[4982]: E0224 15:14:25.069936 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-httpd" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.069942 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-httpd" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.070139 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-httpd" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.070156 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" containerName="neutron-api" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.090329 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.093812 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-68cc98f8f7-kjwtc"] Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.093933 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.096837 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.096904 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.097181 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.152856 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" podUID="546fb62a-ffa6-4067-b267-ea1ff18ee76e" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.192470 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f5da815-46e1-4224-bd34-feb1cdb54446" path="/var/lib/kubelet/pods/1f5da815-46e1-4224-bd34-feb1cdb54446/volumes" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.195729 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="288a42a3-f8aa-4130-b0e3-d7425cbe0ea1" path="/var/lib/kubelet/pods/288a42a3-f8aa-4130-b0e3-d7425cbe0ea1/volumes" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.197237 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c89910f5-6c21-4f91-a07f-5b17503b3882-run-httpd\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.197275 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c89910f5-6c21-4f91-a07f-5b17503b3882-log-httpd\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.197322 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-internal-tls-certs\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.197347 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-public-tls-certs\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.197361 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-combined-ca-bundle\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.197519 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-config-data\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.197543 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln7vc\" (UniqueName: \"kubernetes.io/projected/c89910f5-6c21-4f91-a07f-5b17503b3882-kube-api-access-ln7vc\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.199664 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c89910f5-6c21-4f91-a07f-5b17503b3882-etc-swift\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.305633 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-config-data\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.305717 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln7vc\" (UniqueName: \"kubernetes.io/projected/c89910f5-6c21-4f91-a07f-5b17503b3882-kube-api-access-ln7vc\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.305830 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c89910f5-6c21-4f91-a07f-5b17503b3882-etc-swift\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.305881 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c89910f5-6c21-4f91-a07f-5b17503b3882-run-httpd\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.305919 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c89910f5-6c21-4f91-a07f-5b17503b3882-log-httpd\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.306001 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-internal-tls-certs\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.306029 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-public-tls-certs\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.306054 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-combined-ca-bundle\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.306677 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c89910f5-6c21-4f91-a07f-5b17503b3882-log-httpd\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.307123 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c89910f5-6c21-4f91-a07f-5b17503b3882-run-httpd\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.312320 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-combined-ca-bundle\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.324926 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-internal-tls-certs\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.325046 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-public-tls-certs\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.325326 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c89910f5-6c21-4f91-a07f-5b17503b3882-etc-swift\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.333307 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89910f5-6c21-4f91-a07f-5b17503b3882-config-data\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: 
\"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.355447 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln7vc\" (UniqueName: \"kubernetes.io/projected/c89910f5-6c21-4f91-a07f-5b17503b3882-kube-api-access-ln7vc\") pod \"swift-proxy-68cc98f8f7-kjwtc\" (UID: \"c89910f5-6c21-4f91-a07f-5b17503b3882\") " pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:25 crc kubenswrapper[4982]: I0224 15:14:25.454465 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:26 crc kubenswrapper[4982]: I0224 15:14:26.070333 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-68cc98f8f7-kjwtc"] Feb 24 15:14:26 crc kubenswrapper[4982]: I0224 15:14:26.130286 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" event={"ID":"c89910f5-6c21-4f91-a07f-5b17503b3882","Type":"ContainerStarted","Data":"95b5ad6aaa39f97386c3098064919e0a0595d612565e02bb7581a88c731f942b"} Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.146945 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" event={"ID":"c89910f5-6c21-4f91-a07f-5b17503b3882","Type":"ContainerStarted","Data":"89cca5c6ebce10d697de59e4c611b7a6f0cd416a09fdfa70781bae096abc2e11"} Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.147518 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" event={"ID":"c89910f5-6c21-4f91-a07f-5b17503b3882","Type":"ContainerStarted","Data":"0d46f16234300d31c5f48c636fa572e5c9429e11f2398e372a42d9400c4cb6f6"} Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.148893 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.148921 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.175465 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" podStartSLOduration=2.175437498 podStartE2EDuration="2.175437498s" podCreationTimestamp="2026-02-24 15:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:27.170776102 +0000 UTC m=+1528.789834615" watchObservedRunningTime="2026-02-24 15:14:27.175437498 +0000 UTC m=+1528.794495991" Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.225948 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.226327 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="ceilometer-central-agent" containerID="cri-o://ce385b767124189a353ece95a98b8ab31d0acbf4314723d5b59b4e3194e0b165" gracePeriod=30 Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.226396 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="proxy-httpd" containerID="cri-o://b8b9eade38dc78fab0c669ba24f8f57dc9b6d72839a876e104ba98b2acec23f4" gracePeriod=30 Feb 24 
Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.226472 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="sg-core" containerID="cri-o://37c0f5cbeca2aa684cd5488275807e6f6be2aef5bec20148e3b035f05251fedf" gracePeriod=30
Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.226547 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="ceilometer-notification-agent" containerID="cri-o://8b8921a15d957d8c67441edf16deca3f2f24f204e4f435a5eb4d3b9c326fb614" gracePeriod=30
Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.235535 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.207:3000/\": EOF"
Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.367739 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx"
Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.438927 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-w4r8q"]
Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.439181 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" podUID="292f4788-1eb7-4407-9bb5-3fa8ca1d5702" containerName="dnsmasq-dns" containerID="cri-o://cbd99ad069a2ae3b7fd17f3ad5d50d2455211028049fe8c3bf269906392789bd" gracePeriod=10
Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.698325 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 24 15:14:27 crc kubenswrapper[4982]: I0224 15:14:27.837879 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.249394 4982 generic.go:334] "Generic (PLEG): container finished" podID="292f4788-1eb7-4407-9bb5-3fa8ca1d5702" containerID="cbd99ad069a2ae3b7fd17f3ad5d50d2455211028049fe8c3bf269906392789bd" exitCode=0
Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.249874 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" event={"ID":"292f4788-1eb7-4407-9bb5-3fa8ca1d5702","Type":"ContainerDied","Data":"cbd99ad069a2ae3b7fd17f3ad5d50d2455211028049fe8c3bf269906392789bd"}
Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.285947 4982 generic.go:334] "Generic (PLEG): container finished" podID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerID="b8b9eade38dc78fab0c669ba24f8f57dc9b6d72839a876e104ba98b2acec23f4" exitCode=0
Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.285978 4982 generic.go:334] "Generic (PLEG): container finished" podID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerID="37c0f5cbeca2aa684cd5488275807e6f6be2aef5bec20148e3b035f05251fedf" exitCode=2
Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.285988 4982 generic.go:334] "Generic (PLEG): container finished" podID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerID="8b8921a15d957d8c67441edf16deca3f2f24f204e4f435a5eb4d3b9c326fb614" exitCode=0
Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.285995 4982 generic.go:334] "Generic (PLEG): container finished" podID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerID="ce385b767124189a353ece95a98b8ab31d0acbf4314723d5b59b4e3194e0b165" exitCode=0
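The "Probe failed" line above is an HTTP readiness probe against http://10.217.0.207:3000/ answered with EOF because proxy-httpd was already shutting down. A probe of the same shape in Go; the path and port come from the log, the period and threshold are assumptions, and the embedded field is named ProbeHandler in recent k8s.io/api releases:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/",                  // path from the failure output
                    Port: intstr.FromInt(3000), // port from the failure output
                },
            },
            PeriodSeconds:    10, // assumed; not visible in the log
            FailureThreshold: 3,  // assumed
        }
        fmt.Printf("%+v\n", probe.HTTPGet)
    }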
containerID="ce385b767124189a353ece95a98b8ab31d0acbf4314723d5b59b4e3194e0b165" exitCode=0 Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.287050 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerDied","Data":"b8b9eade38dc78fab0c669ba24f8f57dc9b6d72839a876e104ba98b2acec23f4"} Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.287100 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerDied","Data":"37c0f5cbeca2aa684cd5488275807e6f6be2aef5bec20148e3b035f05251fedf"} Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.287112 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerDied","Data":"8b8921a15d957d8c67441edf16deca3f2f24f204e4f435a5eb4d3b9c326fb614"} Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.287122 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerDied","Data":"ce385b767124189a353ece95a98b8ab31d0acbf4314723d5b59b4e3194e0b165"} Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.287258 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerName="cinder-scheduler" containerID="cri-o://0ddd0fa30d31a5ee8f00ee221550cb7d465a0c78fb0b81e27eb57047a390b583" gracePeriod=30 Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.287709 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerName="probe" containerID="cri-o://6edac1320a3854fa2a5629c14c686f18e776ae387ac8068bf88ec54c5e861229" gracePeriod=30 Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.452069 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.631978 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-swift-storage-0\") pod \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.632031 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-config\") pod \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.632112 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-nb\") pod \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.632199 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-svc\") pod \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.632248 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnn5b\" (UniqueName: \"kubernetes.io/projected/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-kube-api-access-gnn5b\") pod \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.632268 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-sb\") pod \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\" (UID: \"292f4788-1eb7-4407-9bb5-3fa8ca1d5702\") " Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.648714 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-kube-api-access-gnn5b" (OuterVolumeSpecName: "kube-api-access-gnn5b") pod "292f4788-1eb7-4407-9bb5-3fa8ca1d5702" (UID: "292f4788-1eb7-4407-9bb5-3fa8ca1d5702"). InnerVolumeSpecName "kube-api-access-gnn5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.734813 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnn5b\" (UniqueName: \"kubernetes.io/projected/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-kube-api-access-gnn5b\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.755766 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "292f4788-1eb7-4407-9bb5-3fa8ca1d5702" (UID: "292f4788-1eb7-4407-9bb5-3fa8ca1d5702"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.822686 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "292f4788-1eb7-4407-9bb5-3fa8ca1d5702" (UID: "292f4788-1eb7-4407-9bb5-3fa8ca1d5702"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.834869 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-config" (OuterVolumeSpecName: "config") pod "292f4788-1eb7-4407-9bb5-3fa8ca1d5702" (UID: "292f4788-1eb7-4407-9bb5-3fa8ca1d5702"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.836484 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.836520 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.836529 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.903534 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "292f4788-1eb7-4407-9bb5-3fa8ca1d5702" (UID: "292f4788-1eb7-4407-9bb5-3fa8ca1d5702"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.921469 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "292f4788-1eb7-4407-9bb5-3fa8ca1d5702" (UID: "292f4788-1eb7-4407-9bb5-3fa8ca1d5702"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.955965 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.956216 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/292f4788-1eb7-4407-9bb5-3fa8ca1d5702-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:28 crc kubenswrapper[4982]: I0224 15:14:28.981542 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.058609 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-run-httpd\") pod \"fdbad9e7-6972-4ede-a463-6c390f50202f\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.058696 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-combined-ca-bundle\") pod \"fdbad9e7-6972-4ede-a463-6c390f50202f\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.058817 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk64n\" (UniqueName: \"kubernetes.io/projected/fdbad9e7-6972-4ede-a463-6c390f50202f-kube-api-access-bk64n\") pod \"fdbad9e7-6972-4ede-a463-6c390f50202f\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.058901 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-log-httpd\") pod \"fdbad9e7-6972-4ede-a463-6c390f50202f\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.058963 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-config-data\") pod \"fdbad9e7-6972-4ede-a463-6c390f50202f\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.059047 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-sg-core-conf-yaml\") pod \"fdbad9e7-6972-4ede-a463-6c390f50202f\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.059125 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-scripts\") pod \"fdbad9e7-6972-4ede-a463-6c390f50202f\" (UID: \"fdbad9e7-6972-4ede-a463-6c390f50202f\") " Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.059267 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fdbad9e7-6972-4ede-a463-6c390f50202f" (UID: "fdbad9e7-6972-4ede-a463-6c390f50202f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.059662 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.059997 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fdbad9e7-6972-4ede-a463-6c390f50202f" (UID: "fdbad9e7-6972-4ede-a463-6c390f50202f"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.077740 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-scripts" (OuterVolumeSpecName: "scripts") pod "fdbad9e7-6972-4ede-a463-6c390f50202f" (UID: "fdbad9e7-6972-4ede-a463-6c390f50202f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.081753 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdbad9e7-6972-4ede-a463-6c390f50202f-kube-api-access-bk64n" (OuterVolumeSpecName: "kube-api-access-bk64n") pod "fdbad9e7-6972-4ede-a463-6c390f50202f" (UID: "fdbad9e7-6972-4ede-a463-6c390f50202f"). InnerVolumeSpecName "kube-api-access-bk64n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.147616 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fdbad9e7-6972-4ede-a463-6c390f50202f" (UID: "fdbad9e7-6972-4ede-a463-6c390f50202f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.161118 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.161157 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk64n\" (UniqueName: \"kubernetes.io/projected/fdbad9e7-6972-4ede-a463-6c390f50202f-kube-api-access-bk64n\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.161173 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fdbad9e7-6972-4ede-a463-6c390f50202f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.161187 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.195830 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdbad9e7-6972-4ede-a463-6c390f50202f" (UID: "fdbad9e7-6972-4ede-a463-6c390f50202f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.241969 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-config-data" (OuterVolumeSpecName: "config-data") pod "fdbad9e7-6972-4ede-a463-6c390f50202f" (UID: "fdbad9e7-6972-4ede-a463-6c390f50202f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.264047 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.264083 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbad9e7-6972-4ede-a463-6c390f50202f-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.300791 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fdbad9e7-6972-4ede-a463-6c390f50202f","Type":"ContainerDied","Data":"dd9fd9d3d086fca37c0c38f447e9eff337ab43673666d74e6fbcd4087b35262e"} Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.300825 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.300862 4982 scope.go:117] "RemoveContainer" containerID="b8b9eade38dc78fab0c669ba24f8f57dc9b6d72839a876e104ba98b2acec23f4" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.309327 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerID="6edac1320a3854fa2a5629c14c686f18e776ae387ac8068bf88ec54c5e861229" exitCode=0 Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.309407 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e","Type":"ContainerDied","Data":"6edac1320a3854fa2a5629c14c686f18e776ae387ac8068bf88ec54c5e861229"} Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.315554 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.315989 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-w4r8q" event={"ID":"292f4788-1eb7-4407-9bb5-3fa8ca1d5702","Type":"ContainerDied","Data":"a9d12751e16015b382d3cb81ee59447281aaeab02b435e897eaa2d7b95f1e8c2"} Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.340244 4982 scope.go:117] "RemoveContainer" containerID="37c0f5cbeca2aa684cd5488275807e6f6be2aef5bec20148e3b035f05251fedf" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.363413 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.377199 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.386441 4982 scope.go:117] "RemoveContainer" containerID="8b8921a15d957d8c67441edf16deca3f2f24f204e4f435a5eb4d3b9c326fb614" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.387352 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-w4r8q"] Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.400454 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-w4r8q"] Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.409713 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:29 crc kubenswrapper[4982]: E0224 15:14:29.410296 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="ceilometer-notification-agent" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410319 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="ceilometer-notification-agent" Feb 24 15:14:29 crc kubenswrapper[4982]: E0224 15:14:29.410343 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="ceilometer-central-agent" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410352 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="ceilometer-central-agent" Feb 24 15:14:29 crc kubenswrapper[4982]: E0224 15:14:29.410369 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292f4788-1eb7-4407-9bb5-3fa8ca1d5702" containerName="dnsmasq-dns" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410377 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="292f4788-1eb7-4407-9bb5-3fa8ca1d5702" containerName="dnsmasq-dns" Feb 24 15:14:29 crc kubenswrapper[4982]: E0224 15:14:29.410403 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292f4788-1eb7-4407-9bb5-3fa8ca1d5702" containerName="init" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410411 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="292f4788-1eb7-4407-9bb5-3fa8ca1d5702" containerName="init" Feb 24 15:14:29 crc kubenswrapper[4982]: E0224 15:14:29.410419 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="proxy-httpd" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410427 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="proxy-httpd" Feb 24 15:14:29 crc kubenswrapper[4982]: E0224 
15:14:29.410448 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="sg-core" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410458 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="sg-core" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410736 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="sg-core" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410762 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="proxy-httpd" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410773 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="292f4788-1eb7-4407-9bb5-3fa8ca1d5702" containerName="dnsmasq-dns" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410791 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="ceilometer-notification-agent" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.410811 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" containerName="ceilometer-central-agent" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.413149 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.415821 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.419869 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.420606 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.422648 4982 scope.go:117] "RemoveContainer" containerID="ce385b767124189a353ece95a98b8ab31d0acbf4314723d5b59b4e3194e0b165" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.463630 4982 scope.go:117] "RemoveContainer" containerID="cbd99ad069a2ae3b7fd17f3ad5d50d2455211028049fe8c3bf269906392789bd" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.490335 4982 scope.go:117] "RemoveContainer" containerID="ede7f849e484ea214e080e1331ef5f6284fb9e4982522d86ca8c23337d86157a" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.577573 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-log-httpd\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.577634 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-scripts\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.577685 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.577719 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-config-data\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.579278 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-run-httpd\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.579314 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.579341 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4hvj\" (UniqueName: \"kubernetes.io/projected/197eb701-3dd5-4042-bad2-0d007a7a651a-kube-api-access-x4hvj\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.681212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-log-httpd\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.681298 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-scripts\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.681373 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.681426 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-config-data\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.681535 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-run-httpd\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.681556 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4hvj\" (UniqueName: 
\"kubernetes.io/projected/197eb701-3dd5-4042-bad2-0d007a7a651a-kube-api-access-x4hvj\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.681578 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.689083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-log-httpd\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.689205 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-config-data\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.690040 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-run-httpd\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.695640 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.701112 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.701851 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-scripts\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.722215 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4hvj\" (UniqueName: \"kubernetes.io/projected/197eb701-3dd5-4042-bad2-0d007a7a651a-kube-api-access-x4hvj\") pod \"ceilometer-0\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " pod="openstack/ceilometer-0" Feb 24 15:14:29 crc kubenswrapper[4982]: I0224 15:14:29.748537 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:30 crc kubenswrapper[4982]: I0224 15:14:30.357117 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.214600 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="292f4788-1eb7-4407-9bb5-3fa8ca1d5702" path="/var/lib/kubelet/pods/292f4788-1eb7-4407-9bb5-3fa8ca1d5702/volumes" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.252628 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdbad9e7-6972-4ede-a463-6c390f50202f" path="/var/lib/kubelet/pods/fdbad9e7-6972-4ede-a463-6c390f50202f/volumes" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.365736 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerID="0ddd0fa30d31a5ee8f00ee221550cb7d465a0c78fb0b81e27eb57047a390b583" exitCode=0 Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.365815 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e","Type":"ContainerDied","Data":"0ddd0fa30d31a5ee8f00ee221550cb7d465a0c78fb0b81e27eb57047a390b583"} Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.365843 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e","Type":"ContainerDied","Data":"08fdf2a0435d47c706ce0ccc3fb3ec560839e92dead2000ba7a5b8c9754fa139"} Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.365855 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08fdf2a0435d47c706ce0ccc3fb3ec560839e92dead2000ba7a5b8c9754fa139" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.367711 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerStarted","Data":"14f56f0cafdcc53348364cd4d935bd317c1089ffc5c226dfb1034fc7b88072fc"} Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.376960 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.440357 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.449785 4982 scope.go:117] "RemoveContainer" containerID="4bf39451b8e665d55680a2330bebe2181801a7087f1de256d131d30863c01f6a" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.516148 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data-custom\") pod \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.516430 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data\") pod \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.516531 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-combined-ca-bundle\") pod \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.516578 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-scripts\") pod \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.516648 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqrg6\" (UniqueName: \"kubernetes.io/projected/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-kube-api-access-jqrg6\") pod \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.516691 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-etc-machine-id\") pod \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\" (UID: \"5f8b3468-f9a0-474f-885f-c6c2e0f97b7e\") " Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.517514 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" (UID: "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.523760 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-kube-api-access-jqrg6" (OuterVolumeSpecName: "kube-api-access-jqrg6") pod "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" (UID: "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e"). InnerVolumeSpecName "kube-api-access-jqrg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.525239 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-scripts" (OuterVolumeSpecName: "scripts") pod "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" (UID: "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.525657 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" (UID: "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.587892 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" (UID: "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.619981 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqrg6\" (UniqueName: \"kubernetes.io/projected/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-kube-api-access-jqrg6\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.620014 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.620023 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.620031 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.620039 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.680732 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data" (OuterVolumeSpecName: "config-data") pod "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" (UID: "5f8b3468-f9a0-474f-885f-c6c2e0f97b7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:31 crc kubenswrapper[4982]: I0224 15:14:31.722393 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.380352 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerStarted","Data":"bc264d94a6d530fc330724539b141c9bdec1a9060bdadfda6c05e2d34924877c"} Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.382264 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.434389 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.460329 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.473979 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 24 15:14:32 crc kubenswrapper[4982]: E0224 15:14:32.474622 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerName="cinder-scheduler" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.474649 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerName="cinder-scheduler" Feb 24 15:14:32 crc kubenswrapper[4982]: E0224 15:14:32.474688 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerName="probe" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.474698 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerName="probe" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.474998 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerName="probe" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.475032 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" containerName="cinder-scheduler" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.476540 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.479904 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.487955 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.538014 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-config-data\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.538324 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjtkq\" (UniqueName: \"kubernetes.io/projected/9e704a92-af74-4bf8-bf2f-3d684b08a722-kube-api-access-bjtkq\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.538388 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.538434 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.538723 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-scripts\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.538750 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e704a92-af74-4bf8-bf2f-3d684b08a722-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.641247 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-scripts\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.641291 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e704a92-af74-4bf8-bf2f-3d684b08a722-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.641315 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-config-data\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.641419 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjtkq\" (UniqueName: \"kubernetes.io/projected/9e704a92-af74-4bf8-bf2f-3d684b08a722-kube-api-access-bjtkq\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.641424 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e704a92-af74-4bf8-bf2f-3d684b08a722-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.641446 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.641588 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.648066 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.650887 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-scripts\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.653344 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.654880 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e704a92-af74-4bf8-bf2f-3d684b08a722-config-data\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.660001 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjtkq\" (UniqueName: \"kubernetes.io/projected/9e704a92-af74-4bf8-bf2f-3d684b08a722-kube-api-access-bjtkq\") pod \"cinder-scheduler-0\" (UID: \"9e704a92-af74-4bf8-bf2f-3d684b08a722\") " 
pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.818998 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.830744 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7964bbc76-m2h7r"] Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.833669 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.836913 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.837075 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-spq7w" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.843637 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.875570 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7964bbc76-m2h7r"] Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.936772 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-75b5d95788-lvg8w"] Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.938465 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.945490 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.965005 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.965335 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgs87\" (UniqueName: \"kubernetes.io/projected/a92127b8-25b0-4d2a-874d-91cf1acfdc79-kube-api-access-wgs87\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.965549 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data-custom\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.965773 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-combined-ca-bundle\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:32 crc kubenswrapper[4982]: I0224 15:14:32.995554 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sj6rr"] Feb 24 15:14:32 crc 
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.030574 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75b5d95788-lvg8w"]
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.057239 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74d5f4785d-rlf88"]
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.059917 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74d5f4785d-rlf88"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.064219 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.067902 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data-custom\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.067949 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svh6\" (UniqueName: \"kubernetes.io/projected/57a3d588-5808-4049-8f23-d7eb8ac84839-kube-api-access-8svh6\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.067996 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068027 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068072 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmv2\" (UniqueName: \"kubernetes.io/projected/6836c061-e1c3-4897-824e-175a86614fad-kube-api-access-tjmv2\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068103 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-combined-ca-bundle\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068145 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgs87\" (UniqueName: \"kubernetes.io/projected/a92127b8-25b0-4d2a-874d-91cf1acfdc79-kube-api-access-wgs87\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r"
\"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068164 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-config\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068204 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068224 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data-custom\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068256 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068275 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-combined-ca-bundle\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068298 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.068336 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.086744 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.087576 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgs87\" (UniqueName: \"kubernetes.io/projected/a92127b8-25b0-4d2a-874d-91cf1acfdc79-kube-api-access-wgs87\") pod 
\"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.097579 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data-custom\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.100220 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-combined-ca-bundle\") pod \"heat-engine-7964bbc76-m2h7r\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.111997 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sj6rr"] Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.130790 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74d5f4785d-rlf88"] Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.165031 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8b3468-f9a0-474f-885f-c6c2e0f97b7e" path="/var/lib/kubelet/pods/5f8b3468-f9a0-474f-885f-c6c2e0f97b7e/volumes" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.178668 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data-custom\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.178747 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svh6\" (UniqueName: \"kubernetes.io/projected/57a3d588-5808-4049-8f23-d7eb8ac84839-kube-api-access-8svh6\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.178833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.178940 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjmv2\" (UniqueName: \"kubernetes.io/projected/6836c061-e1c3-4897-824e-175a86614fad-kube-api-access-tjmv2\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.178993 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-combined-ca-bundle\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.179027 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bzwk\" (UniqueName: \"kubernetes.io/projected/5deaccac-c61e-42c6-8628-2dc559076fa5-kube-api-access-5bzwk\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.179096 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-config\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.179174 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.179212 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data-custom\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.179242 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.179284 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.179335 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-combined-ca-bundle\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.179376 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.179405 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.183620 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data-custom\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.184363 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.187611 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.187845 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.188252 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.211170 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.212896 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-config\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.221671 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjmv2\" (UniqueName: \"kubernetes.io/projected/6836c061-e1c3-4897-824e-175a86614fad-kube-api-access-tjmv2\") pod \"dnsmasq-dns-688b9f5b49-sj6rr\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.225619 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svh6\" (UniqueName: \"kubernetes.io/projected/57a3d588-5808-4049-8f23-d7eb8ac84839-kube-api-access-8svh6\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.233398 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-combined-ca-bundle\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.235689 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data\") pod \"heat-api-75b5d95788-lvg8w\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " pod="openstack/heat-api-75b5d95788-lvg8w"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.288314 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data-custom\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.289235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-combined-ca-bundle\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.289346 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.290229 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bzwk\" (UniqueName: \"kubernetes.io/projected/5deaccac-c61e-42c6-8628-2dc559076fa5-kube-api-access-5bzwk\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88"
Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.298544 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75b5d95788-lvg8w"
Need to start a new one" pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.303062 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.318971 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data-custom\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.319928 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-combined-ca-bundle\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.331197 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bzwk\" (UniqueName: \"kubernetes.io/projected/5deaccac-c61e-42c6-8628-2dc559076fa5-kube-api-access-5bzwk\") pod \"heat-cfnapi-74d5f4785d-rlf88\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.344624 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:33 crc kubenswrapper[4982]: I0224 15:14:33.594031 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:35 crc kubenswrapper[4982]: I0224 15:14:35.459398 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:35 crc kubenswrapper[4982]: I0224 15:14:35.465106 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-68cc98f8f7-kjwtc" Feb 24 15:14:37 crc kubenswrapper[4982]: I0224 15:14:37.745728 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="26101d60-8191-4669-92ab-200740ed3cf8" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.210:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:14:38 crc kubenswrapper[4982]: I0224 15:14:38.467495 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:38 crc kubenswrapper[4982]: I0224 15:14:38.560358 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:14:38 crc kubenswrapper[4982]: I0224 15:14:38.738320 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:14:38 crc kubenswrapper[4982]: I0224 15:14:38.738670 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:14:38 crc kubenswrapper[4982]: I0224 15:14:38.738731 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:14:38 crc kubenswrapper[4982]: I0224 15:14:38.740357 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f34013f8efc594f596b2fce7ffa43397c7edb98c85b96c1bc1ccf7f7c29f143"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:14:38 crc kubenswrapper[4982]: I0224 15:14:38.740427 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://6f34013f8efc594f596b2fce7ffa43397c7edb98c85b96c1bc1ccf7f7c29f143" gracePeriod=600 Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.277869 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6f466c7dbf-ltqbw"] Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.280304 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.311745 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-c5b67c676-tlqrh"] Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.321921 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.339451 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6d67c96fcc-6mgct"] Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.343355 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.364147 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f466c7dbf-ltqbw"] Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.384211 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c5b67c676-tlqrh"] Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.420969 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d67c96fcc-6mgct"] Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.478476 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.478582 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data-custom\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.478750 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.478809 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data-custom\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.478852 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbmh\" (UniqueName: \"kubernetes.io/projected/b855f1d1-f4f5-412f-a649-6adbdb13130d-kube-api-access-qxbmh\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.478880 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-combined-ca-bundle\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.478905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.478944 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-combined-ca-bundle\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.479178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzbd\" (UniqueName: \"kubernetes.io/projected/ae38b650-d0e6-4fad-9117-c3b6e4d07825-kube-api-access-fkzbd\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.479209 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxpk\" (UniqueName: \"kubernetes.io/projected/4a4b4caf-4419-4612-b1a9-72250327c2f3-kube-api-access-mwxpk\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.479242 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data-custom\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.479272 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-combined-ca-bundle\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.512238 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="6f34013f8efc594f596b2fce7ffa43397c7edb98c85b96c1bc1ccf7f7c29f143" exitCode=0 Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.512284 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"6f34013f8efc594f596b2fce7ffa43397c7edb98c85b96c1bc1ccf7f7c29f143"} Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.512317 4982 scope.go:117] "RemoveContainer" containerID="5be665899696d5c8fd21b8f8f600a79f59d38e14863a16150fbf781e7134602b" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581343 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkzbd\" (UniqueName: \"kubernetes.io/projected/ae38b650-d0e6-4fad-9117-c3b6e4d07825-kube-api-access-fkzbd\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581394 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mwxpk\" (UniqueName: \"kubernetes.io/projected/4a4b4caf-4419-4612-b1a9-72250327c2f3-kube-api-access-mwxpk\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581431 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data-custom\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581455 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-combined-ca-bundle\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581571 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581599 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data-custom\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581667 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581702 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data-custom\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581725 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbmh\" (UniqueName: \"kubernetes.io/projected/b855f1d1-f4f5-412f-a649-6adbdb13130d-kube-api-access-qxbmh\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581764 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-combined-ca-bundle\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581785 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.581814 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-combined-ca-bundle\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.590031 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.591092 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data-custom\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.596371 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-combined-ca-bundle\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.599436 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-combined-ca-bundle\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.600325 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.601084 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.610896 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data-custom\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.611436 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-combined-ca-bundle\") pod 
\"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.616020 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbmh\" (UniqueName: \"kubernetes.io/projected/b855f1d1-f4f5-412f-a649-6adbdb13130d-kube-api-access-qxbmh\") pod \"heat-engine-6f466c7dbf-ltqbw\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") " pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.617810 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxpk\" (UniqueName: \"kubernetes.io/projected/4a4b4caf-4419-4612-b1a9-72250327c2f3-kube-api-access-mwxpk\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.619183 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkzbd\" (UniqueName: \"kubernetes.io/projected/ae38b650-d0e6-4fad-9117-c3b6e4d07825-kube-api-access-fkzbd\") pod \"heat-cfnapi-6d67c96fcc-6mgct\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.630833 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data-custom\") pod \"heat-api-c5b67c676-tlqrh\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.655926 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.716344 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:39 crc kubenswrapper[4982]: I0224 15:14:39.908416 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:41 crc kubenswrapper[4982]: I0224 15:14:41.770621 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74d5f4785d-rlf88"] Feb 24 15:14:41 crc kubenswrapper[4982]: I0224 15:14:41.886392 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-c5b67c676-tlqrh"] Feb 24 15:14:41 crc kubenswrapper[4982]: I0224 15:14:41.948391 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 24 15:14:41 crc kubenswrapper[4982]: W0224 15:14:41.958804 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e704a92_af74_4bf8_bf2f_3d684b08a722.slice/crio-feeda314ed4b5d829035cbd002b124be2cb12528f20b93663ee4d016ec13f40d WatchSource:0}: Error finding container feeda314ed4b5d829035cbd002b124be2cb12528f20b93663ee4d016ec13f40d: Status 404 returned error can't find the container with id feeda314ed4b5d829035cbd002b124be2cb12528f20b93663ee4d016ec13f40d Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.112017 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6f466c7dbf-ltqbw"] Feb 24 15:14:42 crc kubenswrapper[4982]: W0224 15:14:42.115654 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb855f1d1_f4f5_412f_a649_6adbdb13130d.slice/crio-25c2aa7d3c68e15b7a7d0831ae272edf467dfad5468da2cf61706abea7cfba15 WatchSource:0}: Error finding container 25c2aa7d3c68e15b7a7d0831ae272edf467dfad5468da2cf61706abea7cfba15: Status 404 returned error can't find the container with id 25c2aa7d3c68e15b7a7d0831ae272edf467dfad5468da2cf61706abea7cfba15 Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.122626 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6d67c96fcc-6mgct"] Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.397280 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7964bbc76-m2h7r"] Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.413263 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sj6rr"] Feb 24 15:14:42 crc kubenswrapper[4982]: W0224 15:14:42.419764 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424 WatchSource:0}: Error finding container 038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424: Status 404 returned error can't find the container with id 038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424 Feb 24 15:14:42 crc kubenswrapper[4982]: W0224 15:14:42.420022 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6836c061_e1c3_4897_824e_175a86614fad.slice/crio-5b48d0ce4ad4fb023763cc4409fb598b9ed39b463bead7287015d0353ee4fc7c WatchSource:0}: Error finding container 5b48d0ce4ad4fb023763cc4409fb598b9ed39b463bead7287015d0353ee4fc7c: Status 404 returned error can't find the container with id 5b48d0ce4ad4fb023763cc4409fb598b9ed39b463bead7287015d0353ee4fc7c Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.423399 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75b5d95788-lvg8w"] Feb 24 15:14:42 crc 
kubenswrapper[4982]: W0224 15:14:42.431990 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a3d588_5808_4049_8f23_d7eb8ac84839.slice/crio-7d409bc52479c0cd9b0b20de62e1a25bc4f053fe5bfcab02ce419b38b86ea36e WatchSource:0}: Error finding container 7d409bc52479c0cd9b0b20de62e1a25bc4f053fe5bfcab02ce419b38b86ea36e: Status 404 returned error can't find the container with id 7d409bc52479c0cd9b0b20de62e1a25bc4f053fe5bfcab02ce419b38b86ea36e Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.626155 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" event={"ID":"6836c061-e1c3-4897-824e-175a86614fad","Type":"ContainerStarted","Data":"5b48d0ce4ad4fb023763cc4409fb598b9ed39b463bead7287015d0353ee4fc7c"} Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.670459 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9e704a92-af74-4bf8-bf2f-3d684b08a722","Type":"ContainerStarted","Data":"feeda314ed4b5d829035cbd002b124be2cb12528f20b93663ee4d016ec13f40d"} Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.692002 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"546fb62a-ffa6-4067-b267-ea1ff18ee76e","Type":"ContainerStarted","Data":"43146084913d055f8e038ca2367f2769f748b523fd369ea09237de70b9d979cb"} Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.699427 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f466c7dbf-ltqbw" event={"ID":"b855f1d1-f4f5-412f-a649-6adbdb13130d","Type":"ContainerStarted","Data":"6f2ef479b7da24e3ac65c2caaedc0a0d3846feb6075ae4371c8a70a91a641664"} Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.699474 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f466c7dbf-ltqbw" event={"ID":"b855f1d1-f4f5-412f-a649-6adbdb13130d","Type":"ContainerStarted","Data":"25c2aa7d3c68e15b7a7d0831ae272edf467dfad5468da2cf61706abea7cfba15"} Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.701557 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.712350 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75b5d95788-lvg8w" event={"ID":"57a3d588-5808-4049-8f23-d7eb8ac84839","Type":"ContainerStarted","Data":"7d409bc52479c0cd9b0b20de62e1a25bc4f053fe5bfcab02ce419b38b86ea36e"} Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.719992 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7"} Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.727099 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" event={"ID":"5deaccac-c61e-42c6-8628-2dc559076fa5","Type":"ContainerStarted","Data":"cda93b5e01cdbe55147b9c66035710b4c7414fc261e77d0a4eaad5888723b15b"} Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.730374 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7964bbc76-m2h7r" event={"ID":"a92127b8-25b0-4d2a-874d-91cf1acfdc79","Type":"ContainerStarted","Data":"038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424"} Feb 24 15:14:42 crc 
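The W-level manager.go:1169 warnings above come from the kubelet's embedded cAdvisor: it notices a new crio-<id> cgroup and queries the runtime before CRI-O has finished registering the container, so the lookup returns 404. In a capture like this they are usually a harmless startup race; each container ID in question reappears moments later in a ContainerStarted PLEG event. A hedged cross-check sketch (helper name and regexes are mine, inferred from the lines above):

# Sketch: flag "Failed to process watch event ... 404" container IDs that never
# get a matching ContainerStarted PLEG event later in the capture.
import re
import sys

WATCH_404 = re.compile(r"Failed to process watch event .*crio-([0-9a-f]{64})")
STARTED = re.compile(r'"ContainerStarted","Data":"([0-9a-f]{64})"')

def unresolved(lines):
    seen_404, started = [], set()
    for line in lines:
        w = WATCH_404.search(line)
        if w:
            seen_404.append(w.group(1))
        s = STARTED.search(line)
        if s:
            started.add(s.group(1))
    return [cid for cid in seen_404 if cid not in started]

if __name__ == "__main__":
    leftover = unresolved(sys.stdin)
    print("watch 404s with no later ContainerStarted:", leftover or "none")

For this capture all five warned IDs (feeda314..., 25c2aa7d..., 038896d4..., 5b48d0ce..., 7d409bc5...) are started shortly afterwards, so the tool should print "none".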
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.771323 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c5b67c676-tlqrh" event={"ID":"4a4b4caf-4419-4612-b1a9-72250327c2f3","Type":"ContainerStarted","Data":"6f9e2455654409571865f8c55e9447fca2d1e608e1ac2e185221f76ceb686893"}
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.773356 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerStarted","Data":"a5ca219c862f87aadeb6ad7f178ed1d1749a5b1caa15c30b4fe997519c0fe18c"}
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.812920 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" event={"ID":"ae38b650-d0e6-4fad-9117-c3b6e4d07825","Type":"ContainerStarted","Data":"692f46c1785cd30a93ebf7a794d7ef7ffe6d3fe534cceb6459a6b146466e4368"}
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.836653 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-75b5d95788-lvg8w"]
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.850692 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74d5f4785d-rlf88"]
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.876951 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-ffddd7b8-rg7fc"]
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.878489 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.882438 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc"
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.882482 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc"
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.899625 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6864b648bd-6hrlz"]
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.901824 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.908173 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc"
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.908600 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc"
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.924019 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-ffddd7b8-rg7fc"]
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.943417 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6864b648bd-6hrlz"]
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.949613 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.809983539 podStartE2EDuration="20.949594158s" podCreationTimestamp="2026-02-24 15:14:22 +0000 UTC" firstStartedPulling="2026-02-24 15:14:23.573675223 +0000 UTC m=+1525.192733716" lastFinishedPulling="2026-02-24 15:14:40.713285842 +0000 UTC m=+1542.332344335" observedRunningTime="2026-02-24 15:14:42.724488597 +0000 UTC m=+1544.343547110" watchObservedRunningTime="2026-02-24 15:14:42.949594158 +0000 UTC m=+1544.568652651"
Feb 24 15:14:42 crc kubenswrapper[4982]: I0224 15:14:42.991379 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6f466c7dbf-ltqbw" podStartSLOduration=3.991361176 podStartE2EDuration="3.991361176s" podCreationTimestamp="2026-02-24 15:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:42.79031482 +0000 UTC m=+1544.409373313" watchObservedRunningTime="2026-02-24 15:14:42.991361176 +0000 UTC m=+1544.610419669"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.026795 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-combined-ca-bundle\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.026838 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-internal-tls-certs\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.026886 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-internal-tls-certs\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.026935 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-public-tls-certs\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.030770 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data-custom\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.030923 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data-custom\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.031240 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.031283 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-combined-ca-bundle\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.031330 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7v6n\" (UniqueName: \"kubernetes.io/projected/b6da668a-ee13-4cb8-9a49-1efd4f88237e-kube-api-access-w7v6n\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.031385 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-public-tls-certs\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.031420 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.031441 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hcr5\" (UniqueName: \"kubernetes.io/projected/2ee35452-c5a3-489a-8b5a-c2310d6547c1-kube-api-access-5hcr5\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.133888 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-combined-ca-bundle\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134211 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-internal-tls-certs\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134255 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-internal-tls-certs\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134288 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-public-tls-certs\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134316 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data-custom\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134350 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data-custom\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134470 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134489 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-combined-ca-bundle\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134529 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7v6n\" (UniqueName: \"kubernetes.io/projected/b6da668a-ee13-4cb8-9a49-1efd4f88237e-kube-api-access-w7v6n\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134558 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-public-tls-certs\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134578 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.134593 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hcr5\" (UniqueName: \"kubernetes.io/projected/2ee35452-c5a3-489a-8b5a-c2310d6547c1-kube-api-access-5hcr5\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.151934 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-combined-ca-bundle\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.159314 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hcr5\" (UniqueName: \"kubernetes.io/projected/2ee35452-c5a3-489a-8b5a-c2310d6547c1-kube-api-access-5hcr5\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.159410 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-internal-tls-certs\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.159688 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.159807 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-combined-ca-bundle\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.160069 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data-custom\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.160245 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data-custom\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.160936 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-public-tls-certs\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.162117 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.162296 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-public-tls-certs\") pod \"heat-api-6864b648bd-6hrlz\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.162802 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-internal-tls-certs\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.187724 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7v6n\" (UniqueName: \"kubernetes.io/projected/b6da668a-ee13-4cb8-9a49-1efd4f88237e-kube-api-access-w7v6n\") pod \"heat-cfnapi-ffddd7b8-rg7fc\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.360267 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.367171 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6864b648bd-6hrlz"
Need to start a new one" pod="openstack/heat-api-6864b648bd-6hrlz" Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.865944 4982 generic.go:334] "Generic (PLEG): container finished" podID="6836c061-e1c3-4897-824e-175a86614fad" containerID="a758475c96814da78dcd8ee5cf848787db8f6859f1fe63d0e6e109b1f815996c" exitCode=0 Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.866031 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" event={"ID":"6836c061-e1c3-4897-824e-175a86614fad","Type":"ContainerDied","Data":"a758475c96814da78dcd8ee5cf848787db8f6859f1fe63d0e6e109b1f815996c"} Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.868281 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9e704a92-af74-4bf8-bf2f-3d684b08a722","Type":"ContainerStarted","Data":"625a2cff08c3767da63367e68439615efd8123b73159167db896ad482aaf54b6"} Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.882872 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7964bbc76-m2h7r" event={"ID":"a92127b8-25b0-4d2a-874d-91cf1acfdc79","Type":"ContainerStarted","Data":"b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae"} Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.883416 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.950045 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7964bbc76-m2h7r" podStartSLOduration=11.950025335 podStartE2EDuration="11.950025335s" podCreationTimestamp="2026-02-24 15:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:43.940395753 +0000 UTC m=+1545.559454246" watchObservedRunningTime="2026-02-24 15:14:43.950025335 +0000 UTC m=+1545.569083828" Feb 24 15:14:43 crc kubenswrapper[4982]: I0224 15:14:43.961624 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerStarted","Data":"8353ece15dd897a0d901e5ff563b7083cd6e2b86e922d51e1c2a5f0475a4aed2"} Feb 24 15:14:44 crc kubenswrapper[4982]: I0224 15:14:44.121985 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6864b648bd-6hrlz"] Feb 24 15:14:44 crc kubenswrapper[4982]: I0224 15:14:44.251357 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-ffddd7b8-rg7fc"] Feb 24 15:14:44 crc kubenswrapper[4982]: I0224 15:14:44.984395 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" event={"ID":"6836c061-e1c3-4897-824e-175a86614fad","Type":"ContainerStarted","Data":"edfd0654105e29eb0825e12168991b1cce44e29e78dac9540577844d74df8c95"} Feb 24 15:14:44 crc kubenswrapper[4982]: I0224 15:14:44.985056 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:44 crc kubenswrapper[4982]: I0224 15:14:44.997811 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9e704a92-af74-4bf8-bf2f-3d684b08a722","Type":"ContainerStarted","Data":"b5908e2bb3b984f554f10943584f223cb6c0bd34e6aa125373cde257af6e3fe5"} Feb 24 15:14:45 crc kubenswrapper[4982]: I0224 15:14:45.011810 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-6864b648bd-6hrlz" event={"ID":"2ee35452-c5a3-489a-8b5a-c2310d6547c1","Type":"ContainerStarted","Data":"8a94b6f1c8e0de05e79300757ed40364568797d1770fe03689717a455ed5d47d"} Feb 24 15:14:45 crc kubenswrapper[4982]: I0224 15:14:45.014773 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc" event={"ID":"b6da668a-ee13-4cb8-9a49-1efd4f88237e","Type":"ContainerStarted","Data":"596aced197650c72d33f773184aa0e6a0383542e58864fac1a23fde822dddb0c"} Feb 24 15:14:45 crc kubenswrapper[4982]: I0224 15:14:45.017193 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" podStartSLOduration=13.017174429 podStartE2EDuration="13.017174429s" podCreationTimestamp="2026-02-24 15:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:45.011526456 +0000 UTC m=+1546.630584969" watchObservedRunningTime="2026-02-24 15:14:45.017174429 +0000 UTC m=+1546.636232922" Feb 24 15:14:45 crc kubenswrapper[4982]: I0224 15:14:45.045882 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=13.04586307 podStartE2EDuration="13.04586307s" podCreationTimestamp="2026-02-24 15:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:45.031057587 +0000 UTC m=+1546.650116080" watchObservedRunningTime="2026-02-24 15:14:45.04586307 +0000 UTC m=+1546.664921563" Feb 24 15:14:47 crc kubenswrapper[4982]: I0224 15:14:47.819796 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.072789 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75b5d95788-lvg8w" event={"ID":"57a3d588-5808-4049-8f23-d7eb8ac84839","Type":"ContainerStarted","Data":"8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5"} Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.073488 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.072893 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-75b5d95788-lvg8w" podUID="57a3d588-5808-4049-8f23-d7eb8ac84839" containerName="heat-api" containerID="cri-o://8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5" gracePeriod=60 Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.077756 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6864b648bd-6hrlz" event={"ID":"2ee35452-c5a3-489a-8b5a-c2310d6547c1","Type":"ContainerStarted","Data":"e28140a5776da40ceadd26a27415308dcf53780fa6e5bc6dc34055aed1e45744"} Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.081931 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" event={"ID":"5deaccac-c61e-42c6-8628-2dc559076fa5","Type":"ContainerStarted","Data":"ba973e68f7e02b63d90d5d7c63472294f445f6b7fffd9e33bf086a4c932d7f9a"} Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.081982 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" podUID="5deaccac-c61e-42c6-8628-2dc559076fa5" containerName="heat-cfnapi" 
containerID="cri-o://ba973e68f7e02b63d90d5d7c63472294f445f6b7fffd9e33bf086a4c932d7f9a" gracePeriod=60 Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.082076 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.095730 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-75b5d95788-lvg8w" podStartSLOduration=11.975707235 podStartE2EDuration="17.095714529s" podCreationTimestamp="2026-02-24 15:14:32 +0000 UTC" firstStartedPulling="2026-02-24 15:14:42.4433392 +0000 UTC m=+1544.062397703" lastFinishedPulling="2026-02-24 15:14:47.563346504 +0000 UTC m=+1549.182404997" observedRunningTime="2026-02-24 15:14:49.091659598 +0000 UTC m=+1550.710718101" watchObservedRunningTime="2026-02-24 15:14:49.095714529 +0000 UTC m=+1550.714773022" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.097304 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc" event={"ID":"b6da668a-ee13-4cb8-9a49-1efd4f88237e","Type":"ContainerStarted","Data":"001b717024bc38c2123396fec6ba7af1872551207affaeeb61687cee3d073709"} Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.097399 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.099294 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c5b67c676-tlqrh" event={"ID":"4a4b4caf-4419-4612-b1a9-72250327c2f3","Type":"ContainerStarted","Data":"ca0a40681ff598ee1068be58e285889e72999c6b4a8dd898d2101c320e847c67"} Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.099486 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.103700 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerStarted","Data":"be09c86fb37803eed1ad5a9b5d1bae619d2ef2ea67103071f023ad58b2bc102e"} Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.103864 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="ceilometer-central-agent" containerID="cri-o://bc264d94a6d530fc330724539b141c9bdec1a9060bdadfda6c05e2d34924877c" gracePeriod=30 Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.104099 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.104142 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="proxy-httpd" containerID="cri-o://be09c86fb37803eed1ad5a9b5d1bae619d2ef2ea67103071f023ad58b2bc102e" gracePeriod=30 Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.104188 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="sg-core" containerID="cri-o://8353ece15dd897a0d901e5ff563b7083cd6e2b86e922d51e1c2a5f0475a4aed2" gracePeriod=30 Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.104231 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="ceilometer-notification-agent" containerID="cri-o://a5ca219c862f87aadeb6ad7f178ed1d1749a5b1caa15c30b4fe997519c0fe18c" gracePeriod=30 Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.106681 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" event={"ID":"ae38b650-d0e6-4fad-9117-c3b6e4d07825","Type":"ContainerStarted","Data":"54b8d6386aa23049e5a48a5170c171a6a70a1836c68e99d9b712359ce1ae133c"} Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.107410 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.119130 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" podStartSLOduration=11.034224754 podStartE2EDuration="17.119114446s" podCreationTimestamp="2026-02-24 15:14:32 +0000 UTC" firstStartedPulling="2026-02-24 15:14:41.780414126 +0000 UTC m=+1543.399472619" lastFinishedPulling="2026-02-24 15:14:47.865303828 +0000 UTC m=+1549.484362311" observedRunningTime="2026-02-24 15:14:49.111661593 +0000 UTC m=+1550.730720076" watchObservedRunningTime="2026-02-24 15:14:49.119114446 +0000 UTC m=+1550.738172939" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.131344 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" podStartSLOduration=4.746425781 podStartE2EDuration="10.131329359s" podCreationTimestamp="2026-02-24 15:14:39 +0000 UTC" firstStartedPulling="2026-02-24 15:14:42.132772383 +0000 UTC m=+1543.751830886" lastFinishedPulling="2026-02-24 15:14:47.517675971 +0000 UTC m=+1549.136734464" observedRunningTime="2026-02-24 15:14:49.129679794 +0000 UTC m=+1550.748738287" watchObservedRunningTime="2026-02-24 15:14:49.131329359 +0000 UTC m=+1550.750387852" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.157384 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.56470119 podStartE2EDuration="20.157364388s" podCreationTimestamp="2026-02-24 15:14:29 +0000 UTC" firstStartedPulling="2026-02-24 15:14:30.349886084 +0000 UTC m=+1531.968944577" lastFinishedPulling="2026-02-24 15:14:47.942549282 +0000 UTC m=+1549.561607775" observedRunningTime="2026-02-24 15:14:49.154381887 +0000 UTC m=+1550.773440380" watchObservedRunningTime="2026-02-24 15:14:49.157364388 +0000 UTC m=+1550.776422881" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.184383 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc" podStartSLOduration=3.470110274 podStartE2EDuration="7.184366283s" podCreationTimestamp="2026-02-24 15:14:42 +0000 UTC" firstStartedPulling="2026-02-24 15:14:44.323156647 +0000 UTC m=+1545.942215130" lastFinishedPulling="2026-02-24 15:14:48.037412636 +0000 UTC m=+1549.656471139" observedRunningTime="2026-02-24 15:14:49.179055178 +0000 UTC m=+1550.798113671" watchObservedRunningTime="2026-02-24 15:14:49.184366283 +0000 UTC m=+1550.803424776" Feb 24 15:14:49 crc kubenswrapper[4982]: I0224 15:14:49.203560 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-c5b67c676-tlqrh" podStartSLOduration=4.3126493759999995 podStartE2EDuration="10.203484034s" podCreationTimestamp="2026-02-24 15:14:39 +0000 UTC" firstStartedPulling="2026-02-24 15:14:41.900176607 +0000 UTC 
m=+1543.519235100" lastFinishedPulling="2026-02-24 15:14:47.791011265 +0000 UTC m=+1549.410069758" observedRunningTime="2026-02-24 15:14:49.194684784 +0000 UTC m=+1550.813743277" watchObservedRunningTime="2026-02-24 15:14:49.203484034 +0000 UTC m=+1550.822542527" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.120872 4982 generic.go:334] "Generic (PLEG): container finished" podID="4a4b4caf-4419-4612-b1a9-72250327c2f3" containerID="ca0a40681ff598ee1068be58e285889e72999c6b4a8dd898d2101c320e847c67" exitCode=1 Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.121103 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c5b67c676-tlqrh" event={"ID":"4a4b4caf-4419-4612-b1a9-72250327c2f3","Type":"ContainerDied","Data":"ca0a40681ff598ee1068be58e285889e72999c6b4a8dd898d2101c320e847c67"} Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.122003 4982 scope.go:117] "RemoveContainer" containerID="ca0a40681ff598ee1068be58e285889e72999c6b4a8dd898d2101c320e847c67" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.133606 4982 generic.go:334] "Generic (PLEG): container finished" podID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerID="be09c86fb37803eed1ad5a9b5d1bae619d2ef2ea67103071f023ad58b2bc102e" exitCode=0 Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.133640 4982 generic.go:334] "Generic (PLEG): container finished" podID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerID="8353ece15dd897a0d901e5ff563b7083cd6e2b86e922d51e1c2a5f0475a4aed2" exitCode=2 Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.133648 4982 generic.go:334] "Generic (PLEG): container finished" podID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerID="a5ca219c862f87aadeb6ad7f178ed1d1749a5b1caa15c30b4fe997519c0fe18c" exitCode=0 Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.133659 4982 generic.go:334] "Generic (PLEG): container finished" podID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerID="bc264d94a6d530fc330724539b141c9bdec1a9060bdadfda6c05e2d34924877c" exitCode=0 Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.133707 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerDied","Data":"be09c86fb37803eed1ad5a9b5d1bae619d2ef2ea67103071f023ad58b2bc102e"} Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.133731 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerDied","Data":"8353ece15dd897a0d901e5ff563b7083cd6e2b86e922d51e1c2a5f0475a4aed2"} Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.133745 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerDied","Data":"a5ca219c862f87aadeb6ad7f178ed1d1749a5b1caa15c30b4fe997519c0fe18c"} Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.133756 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerDied","Data":"bc264d94a6d530fc330724539b141c9bdec1a9060bdadfda6c05e2d34924877c"} Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.149299 4982 generic.go:334] "Generic (PLEG): container finished" podID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" containerID="54b8d6386aa23049e5a48a5170c171a6a70a1836c68e99d9b712359ce1ae133c" exitCode=1 Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.149381 4982 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" event={"ID":"ae38b650-d0e6-4fad-9117-c3b6e4d07825","Type":"ContainerDied","Data":"54b8d6386aa23049e5a48a5170c171a6a70a1836c68e99d9b712359ce1ae133c"} Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.160754 4982 scope.go:117] "RemoveContainer" containerID="54b8d6386aa23049e5a48a5170c171a6a70a1836c68e99d9b712359ce1ae133c" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.188264 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6864b648bd-6hrlz" podStartSLOduration=4.469184835 podStartE2EDuration="8.188243544s" podCreationTimestamp="2026-02-24 15:14:42 +0000 UTC" firstStartedPulling="2026-02-24 15:14:44.318386248 +0000 UTC m=+1545.937444741" lastFinishedPulling="2026-02-24 15:14:48.037444957 +0000 UTC m=+1549.656503450" observedRunningTime="2026-02-24 15:14:50.172641829 +0000 UTC m=+1551.791700322" watchObservedRunningTime="2026-02-24 15:14:50.188243544 +0000 UTC m=+1551.807302027" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.376296 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.439187 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-scripts\") pod \"197eb701-3dd5-4042-bad2-0d007a7a651a\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.439670 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-run-httpd\") pod \"197eb701-3dd5-4042-bad2-0d007a7a651a\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.439771 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-log-httpd\") pod \"197eb701-3dd5-4042-bad2-0d007a7a651a\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.439855 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-combined-ca-bundle\") pod \"197eb701-3dd5-4042-bad2-0d007a7a651a\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.439992 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-config-data\") pod \"197eb701-3dd5-4042-bad2-0d007a7a651a\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.440038 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4hvj\" (UniqueName: \"kubernetes.io/projected/197eb701-3dd5-4042-bad2-0d007a7a651a-kube-api-access-x4hvj\") pod \"197eb701-3dd5-4042-bad2-0d007a7a651a\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.440084 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-sg-core-conf-yaml\") pod \"197eb701-3dd5-4042-bad2-0d007a7a651a\" (UID: \"197eb701-3dd5-4042-bad2-0d007a7a651a\") " Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.457665 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "197eb701-3dd5-4042-bad2-0d007a7a651a" (UID: "197eb701-3dd5-4042-bad2-0d007a7a651a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.461246 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "197eb701-3dd5-4042-bad2-0d007a7a651a" (UID: "197eb701-3dd5-4042-bad2-0d007a7a651a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.475092 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-scripts" (OuterVolumeSpecName: "scripts") pod "197eb701-3dd5-4042-bad2-0d007a7a651a" (UID: "197eb701-3dd5-4042-bad2-0d007a7a651a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.479265 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/197eb701-3dd5-4042-bad2-0d007a7a651a-kube-api-access-x4hvj" (OuterVolumeSpecName: "kube-api-access-x4hvj") pod "197eb701-3dd5-4042-bad2-0d007a7a651a" (UID: "197eb701-3dd5-4042-bad2-0d007a7a651a"). InnerVolumeSpecName "kube-api-access-x4hvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.509745 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "197eb701-3dd5-4042-bad2-0d007a7a651a" (UID: "197eb701-3dd5-4042-bad2-0d007a7a651a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.543342 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.543389 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/197eb701-3dd5-4042-bad2-0d007a7a651a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.543407 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4hvj\" (UniqueName: \"kubernetes.io/projected/197eb701-3dd5-4042-bad2-0d007a7a651a-kube-api-access-x4hvj\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.543419 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.543435 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.685369 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "197eb701-3dd5-4042-bad2-0d007a7a651a" (UID: "197eb701-3dd5-4042-bad2-0d007a7a651a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.685740 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f7b97458f-lsgqv" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.747642 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-config-data" (OuterVolumeSpecName: "config-data") pod "197eb701-3dd5-4042-bad2-0d007a7a651a" (UID: "197eb701-3dd5-4042-bad2-0d007a7a651a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.756528 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.756557 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197eb701-3dd5-4042-bad2-0d007a7a651a-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.790031 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65698f94df-gbcsr"] Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.790883 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65698f94df-gbcsr" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-api" containerID="cri-o://70082dd99b1e75ae290840b2beecb2c0623428c56eb38e096e8ef81392a0a651" gracePeriod=30 Feb 24 15:14:50 crc kubenswrapper[4982]: I0224 15:14:50.792280 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-65698f94df-gbcsr" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-httpd" containerID="cri-o://b9ffcb63eb862dc15ad4ec83fbdbd8580f2dcffded18d222cce0c73c8e5907cc" gracePeriod=30 Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.179680 4982 generic.go:334] "Generic (PLEG): container finished" podID="4a4b4caf-4419-4612-b1a9-72250327c2f3" containerID="cf9dec3cb5db229962f6f293ee64bef564e03b68399001012d0e7a2d31d0a6cc" exitCode=1 Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.180567 4982 scope.go:117] "RemoveContainer" containerID="cf9dec3cb5db229962f6f293ee64bef564e03b68399001012d0e7a2d31d0a6cc" Feb 24 15:14:51 crc kubenswrapper[4982]: E0224 15:14:51.180811 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-c5b67c676-tlqrh_openstack(4a4b4caf-4419-4612-b1a9-72250327c2f3)\"" pod="openstack/heat-api-c5b67c676-tlqrh" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.192064 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c5b67c676-tlqrh" event={"ID":"4a4b4caf-4419-4612-b1a9-72250327c2f3","Type":"ContainerDied","Data":"cf9dec3cb5db229962f6f293ee64bef564e03b68399001012d0e7a2d31d0a6cc"} Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.192119 4982 scope.go:117] "RemoveContainer" containerID="ca0a40681ff598ee1068be58e285889e72999c6b4a8dd898d2101c320e847c67" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.192337 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.192456 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"197eb701-3dd5-4042-bad2-0d007a7a651a","Type":"ContainerDied","Data":"14f56f0cafdcc53348364cd4d935bd317c1089ffc5c226dfb1034fc7b88072fc"} Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.200223 4982 generic.go:334] "Generic (PLEG): container finished" podID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" containerID="eb93a46b001942fa4777e62653fa940092106d165ab98aefbed1d34b6d478cec" exitCode=1 Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.200311 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" event={"ID":"ae38b650-d0e6-4fad-9117-c3b6e4d07825","Type":"ContainerDied","Data":"eb93a46b001942fa4777e62653fa940092106d165ab98aefbed1d34b6d478cec"} Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.203478 4982 scope.go:117] "RemoveContainer" containerID="eb93a46b001942fa4777e62653fa940092106d165ab98aefbed1d34b6d478cec" Feb 24 15:14:51 crc kubenswrapper[4982]: E0224 15:14:51.203870 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d67c96fcc-6mgct_openstack(ae38b650-d0e6-4fad-9117-c3b6e4d07825)\"" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.209241 4982 generic.go:334] "Generic (PLEG): container finished" podID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerID="b9ffcb63eb862dc15ad4ec83fbdbd8580f2dcffded18d222cce0c73c8e5907cc" exitCode=0 Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.209288 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65698f94df-gbcsr" event={"ID":"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef","Type":"ContainerDied","Data":"b9ffcb63eb862dc15ad4ec83fbdbd8580f2dcffded18d222cce0c73c8e5907cc"} Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.342555 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.369663 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.380876 4982 scope.go:117] "RemoveContainer" containerID="be09c86fb37803eed1ad5a9b5d1bae619d2ef2ea67103071f023ad58b2bc102e" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.408547 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:51 crc kubenswrapper[4982]: E0224 15:14:51.409043 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="ceilometer-notification-agent" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.409055 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="ceilometer-notification-agent" Feb 24 15:14:51 crc kubenswrapper[4982]: E0224 15:14:51.409072 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="proxy-httpd" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.409078 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="proxy-httpd" Feb 24 15:14:51 crc 
kubenswrapper[4982]: E0224 15:14:51.409108 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="sg-core"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.409115 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="sg-core"
Feb 24 15:14:51 crc kubenswrapper[4982]: E0224 15:14:51.409129 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="ceilometer-central-agent"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.409135 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="ceilometer-central-agent"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.409337 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="sg-core"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.409348 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="proxy-httpd"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.409375 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="ceilometer-notification-agent"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.409385 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" containerName="ceilometer-central-agent"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.411307 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.416892 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.426387 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.427439 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.481571 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-config-data\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.481642 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-log-httpd\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.481712 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0"
Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.481763 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\"
(UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.481796 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-scripts\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.481867 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsc57\" (UniqueName: \"kubernetes.io/projected/c1571922-8d74-4fb5-bf86-1093938b554d-kube-api-access-dsc57\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.481883 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-run-httpd\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.487885 4982 scope.go:117] "RemoveContainer" containerID="8353ece15dd897a0d901e5ff563b7083cd6e2b86e922d51e1c2a5f0475a4aed2" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.552616 4982 scope.go:117] "RemoveContainer" containerID="a5ca219c862f87aadeb6ad7f178ed1d1749a5b1caa15c30b4fe997519c0fe18c" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.584791 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-log-httpd\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.584892 4982 scope.go:117] "RemoveContainer" containerID="bc264d94a6d530fc330724539b141c9bdec1a9060bdadfda6c05e2d34924877c" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.584915 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.584980 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.585017 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-scripts\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.585098 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsc57\" (UniqueName: \"kubernetes.io/projected/c1571922-8d74-4fb5-bf86-1093938b554d-kube-api-access-dsc57\") pod \"ceilometer-0\" (UID: 
\"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.585133 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-run-httpd\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.585218 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-config-data\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.585808 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-log-httpd\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.586128 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-run-httpd\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.591038 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.592166 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.592696 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-scripts\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.593423 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-config-data\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.603871 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsc57\" (UniqueName: \"kubernetes.io/projected/c1571922-8d74-4fb5-bf86-1093938b554d-kube-api-access-dsc57\") pod \"ceilometer-0\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " pod="openstack/ceilometer-0" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.623483 4982 scope.go:117] "RemoveContainer" containerID="54b8d6386aa23049e5a48a5170c171a6a70a1836c68e99d9b712359ce1ae133c" Feb 24 15:14:51 crc kubenswrapper[4982]: I0224 15:14:51.758002 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.239860 4982 generic.go:334] "Generic (PLEG): container finished" podID="26101d60-8191-4669-92ab-200740ed3cf8" containerID="616a12ba6b7ffa85d5af7a82f71f9af9d5a280686817ea3d4e410492ba111475" exitCode=137 Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.240042 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"26101d60-8191-4669-92ab-200740ed3cf8","Type":"ContainerDied","Data":"616a12ba6b7ffa85d5af7a82f71f9af9d5a280686817ea3d4e410492ba111475"} Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.246767 4982 scope.go:117] "RemoveContainer" containerID="cf9dec3cb5db229962f6f293ee64bef564e03b68399001012d0e7a2d31d0a6cc" Feb 24 15:14:52 crc kubenswrapper[4982]: E0224 15:14:52.247528 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-c5b67c676-tlqrh_openstack(4a4b4caf-4419-4612-b1a9-72250327c2f3)\"" pod="openstack/heat-api-c5b67c676-tlqrh" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.259564 4982 scope.go:117] "RemoveContainer" containerID="eb93a46b001942fa4777e62653fa940092106d165ab98aefbed1d34b6d478cec" Feb 24 15:14:52 crc kubenswrapper[4982]: E0224 15:14:52.264772 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d67c96fcc-6mgct_openstack(ae38b650-d0e6-4fad-9117-c3b6e4d07825)\"" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.315559 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:14:52 crc kubenswrapper[4982]: W0224 15:14:52.316678 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-497af5b1647d09a0c07895a2e17d59aa84c48312ea6a759bfd2887c0d48c388c WatchSource:0}: Error finding container 497af5b1647d09a0c07895a2e17d59aa84c48312ea6a759bfd2887c0d48c388c: Status 404 returned error can't find the container with id 497af5b1647d09a0c07895a2e17d59aa84c48312ea6a759bfd2887c0d48c388c Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.506065 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.608581 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data\") pod \"26101d60-8191-4669-92ab-200740ed3cf8\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.608652 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rgn9\" (UniqueName: \"kubernetes.io/projected/26101d60-8191-4669-92ab-200740ed3cf8-kube-api-access-2rgn9\") pod \"26101d60-8191-4669-92ab-200740ed3cf8\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.608713 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/26101d60-8191-4669-92ab-200740ed3cf8-etc-machine-id\") pod \"26101d60-8191-4669-92ab-200740ed3cf8\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.608857 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26101d60-8191-4669-92ab-200740ed3cf8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "26101d60-8191-4669-92ab-200740ed3cf8" (UID: "26101d60-8191-4669-92ab-200740ed3cf8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.609544 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-combined-ca-bundle\") pod \"26101d60-8191-4669-92ab-200740ed3cf8\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.609602 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26101d60-8191-4669-92ab-200740ed3cf8-logs\") pod \"26101d60-8191-4669-92ab-200740ed3cf8\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.609691 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data-custom\") pod \"26101d60-8191-4669-92ab-200740ed3cf8\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.609785 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-scripts\") pod \"26101d60-8191-4669-92ab-200740ed3cf8\" (UID: \"26101d60-8191-4669-92ab-200740ed3cf8\") " Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.610483 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26101d60-8191-4669-92ab-200740ed3cf8-logs" (OuterVolumeSpecName: "logs") pod "26101d60-8191-4669-92ab-200740ed3cf8" (UID: "26101d60-8191-4669-92ab-200740ed3cf8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.611439 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/26101d60-8191-4669-92ab-200740ed3cf8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.611476 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26101d60-8191-4669-92ab-200740ed3cf8-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.625309 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-scripts" (OuterVolumeSpecName: "scripts") pod "26101d60-8191-4669-92ab-200740ed3cf8" (UID: "26101d60-8191-4669-92ab-200740ed3cf8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.625412 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "26101d60-8191-4669-92ab-200740ed3cf8" (UID: "26101d60-8191-4669-92ab-200740ed3cf8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.625404 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26101d60-8191-4669-92ab-200740ed3cf8-kube-api-access-2rgn9" (OuterVolumeSpecName: "kube-api-access-2rgn9") pod "26101d60-8191-4669-92ab-200740ed3cf8" (UID: "26101d60-8191-4669-92ab-200740ed3cf8"). InnerVolumeSpecName "kube-api-access-2rgn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.653819 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26101d60-8191-4669-92ab-200740ed3cf8" (UID: "26101d60-8191-4669-92ab-200740ed3cf8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.713661 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rgn9\" (UniqueName: \"kubernetes.io/projected/26101d60-8191-4669-92ab-200740ed3cf8-kube-api-access-2rgn9\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.713689 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.713700 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.713709 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.717584 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data" (OuterVolumeSpecName: "config-data") pod "26101d60-8191-4669-92ab-200740ed3cf8" (UID: "26101d60-8191-4669-92ab-200740ed3cf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:52 crc kubenswrapper[4982]: I0224 15:14:52.815833 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26101d60-8191-4669-92ab-200740ed3cf8-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.045371 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.158363 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="197eb701-3dd5-4042-bad2-0d007a7a651a" path="/var/lib/kubelet/pods/197eb701-3dd5-4042-bad2-0d007a7a651a/volumes" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.229138 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.269989 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"26101d60-8191-4669-92ab-200740ed3cf8","Type":"ContainerDied","Data":"2628423c2798ebd04134cf540e1f6321e981ad32cca164632598b27c6a143c45"} Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.270044 4982 scope.go:117] "RemoveContainer" containerID="616a12ba6b7ffa85d5af7a82f71f9af9d5a280686817ea3d4e410492ba111475" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.270185 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.276967 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerStarted","Data":"dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe"} Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.277013 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerStarted","Data":"497af5b1647d09a0c07895a2e17d59aa84c48312ea6a759bfd2887c0d48c388c"} Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.304740 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.306408 4982 scope.go:117] "RemoveContainer" containerID="975184eaf128e0f78606254c21956873d47cf17215c7f3d1b586dddcc9a0cd73" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.317343 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.339365 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 24 15:14:53 crc kubenswrapper[4982]: E0224 15:14:53.339919 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26101d60-8191-4669-92ab-200740ed3cf8" containerName="cinder-api" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.339940 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="26101d60-8191-4669-92ab-200740ed3cf8" containerName="cinder-api" Feb 24 15:14:53 crc kubenswrapper[4982]: E0224 15:14:53.339962 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26101d60-8191-4669-92ab-200740ed3cf8" containerName="cinder-api-log" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.339968 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="26101d60-8191-4669-92ab-200740ed3cf8" containerName="cinder-api-log" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.340328 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="26101d60-8191-4669-92ab-200740ed3cf8" containerName="cinder-api-log" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.340363 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="26101d60-8191-4669-92ab-200740ed3cf8" containerName="cinder-api" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.342038 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.348670 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.351390 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.351644 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.352483 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.355544 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.374145 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6864b648bd-6hrlz" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.432629 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crj8t\" (UniqueName: \"kubernetes.io/projected/0aa9e47a-4c17-47f4-9541-60b8f91236fd-kube-api-access-crj8t\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.432795 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0aa9e47a-4c17-47f4-9541-60b8f91236fd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.432969 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-config-data\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.432996 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.433099 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-scripts\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.433140 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-config-data-custom\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.433274 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0aa9e47a-4c17-47f4-9541-60b8f91236fd-logs\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.433368 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.433406 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.469901 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g5cjx"] Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.470377 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" podUID="7043c05a-72b5-47d9-a561-c562f82ae807" containerName="dnsmasq-dns" containerID="cri-o://7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31" gracePeriod=10 Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.539196 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-scripts\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.539596 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-config-data-custom\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.539702 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa9e47a-4c17-47f4-9541-60b8f91236fd-logs\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.539767 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.539801 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.539843 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crj8t\" (UniqueName: \"kubernetes.io/projected/0aa9e47a-4c17-47f4-9541-60b8f91236fd-kube-api-access-crj8t\") pod \"cinder-api-0\" (UID: 
\"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.539920 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0aa9e47a-4c17-47f4-9541-60b8f91236fd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.540011 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-config-data\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.540028 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa9e47a-4c17-47f4-9541-60b8f91236fd-logs\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.540035 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.540384 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0aa9e47a-4c17-47f4-9541-60b8f91236fd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.552884 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.555363 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-config-data\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.555919 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.556442 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-config-data-custom\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.562176 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-scripts\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0" Feb 24 
15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.563745 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa9e47a-4c17-47f4-9541-60b8f91236fd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0"
Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.570968 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crj8t\" (UniqueName: \"kubernetes.io/projected/0aa9e47a-4c17-47f4-9541-60b8f91236fd-kube-api-access-crj8t\") pod \"cinder-api-0\" (UID: \"0aa9e47a-4c17-47f4-9541-60b8f91236fd\") " pod="openstack/cinder-api-0"
Feb 24 15:14:53 crc kubenswrapper[4982]: I0224 15:14:53.674075 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.239132 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx"
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.263452 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-svc\") pod \"7043c05a-72b5-47d9-a561-c562f82ae807\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") "
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.263663 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-config\") pod \"7043c05a-72b5-47d9-a561-c562f82ae807\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") "
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.263735 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-sb\") pod \"7043c05a-72b5-47d9-a561-c562f82ae807\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") "
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.263826 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-swift-storage-0\") pod \"7043c05a-72b5-47d9-a561-c562f82ae807\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") "
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.263957 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-nb\") pod \"7043c05a-72b5-47d9-a561-c562f82ae807\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") "
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.263979 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6n4p\" (UniqueName: \"kubernetes.io/projected/7043c05a-72b5-47d9-a561-c562f82ae807-kube-api-access-f6n4p\") pod \"7043c05a-72b5-47d9-a561-c562f82ae807\" (UID: \"7043c05a-72b5-47d9-a561-c562f82ae807\") "
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.291774 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7043c05a-72b5-47d9-a561-c562f82ae807-kube-api-access-f6n4p" (OuterVolumeSpecName: "kube-api-access-f6n4p") pod "7043c05a-72b5-47d9-a561-c562f82ae807" (UID: "7043c05a-72b5-47d9-a561-c562f82ae807").
InnerVolumeSpecName "kube-api-access-f6n4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.339884 4982 generic.go:334] "Generic (PLEG): container finished" podID="7043c05a-72b5-47d9-a561-c562f82ae807" containerID="7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31" exitCode=0 Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.339985 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" event={"ID":"7043c05a-72b5-47d9-a561-c562f82ae807","Type":"ContainerDied","Data":"7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31"} Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.340040 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" event={"ID":"7043c05a-72b5-47d9-a561-c562f82ae807","Type":"ContainerDied","Data":"c2ba41113c4eef84072fdc1f268654796009aa0f612247b179c26ba4083fcae9"} Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.340057 4982 scope.go:117] "RemoveContainer" containerID="7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.340267 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g5cjx" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.379439 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.403040 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerStarted","Data":"2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2"} Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.403722 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7043c05a-72b5-47d9-a561-c562f82ae807" (UID: "7043c05a-72b5-47d9-a561-c562f82ae807"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:54 crc kubenswrapper[4982]: W0224 15:14:54.403943 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0aa9e47a_4c17_47f4_9541_60b8f91236fd.slice/crio-808804c0c51155707473fa65b7e7b79e0ccaa8df699c8c339cc41e173b4fae39 WatchSource:0}: Error finding container 808804c0c51155707473fa65b7e7b79e0ccaa8df699c8c339cc41e173b4fae39: Status 404 returned error can't find the container with id 808804c0c51155707473fa65b7e7b79e0ccaa8df699c8c339cc41e173b4fae39 Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.405965 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.405989 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6n4p\" (UniqueName: \"kubernetes.io/projected/7043c05a-72b5-47d9-a561-c562f82ae807-kube-api-access-f6n4p\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.414053 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7043c05a-72b5-47d9-a561-c562f82ae807" (UID: "7043c05a-72b5-47d9-a561-c562f82ae807"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.432205 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7043c05a-72b5-47d9-a561-c562f82ae807" (UID: "7043c05a-72b5-47d9-a561-c562f82ae807"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.435380 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7043c05a-72b5-47d9-a561-c562f82ae807" (UID: "7043c05a-72b5-47d9-a561-c562f82ae807"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.438273 4982 scope.go:117] "RemoveContainer" containerID="4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.452100 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-config" (OuterVolumeSpecName: "config") pod "7043c05a-72b5-47d9-a561-c562f82ae807" (UID: "7043c05a-72b5-47d9-a561-c562f82ae807"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.507726 4982 scope.go:117] "RemoveContainer" containerID="7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.509440 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.509461 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.509473 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.509483 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7043c05a-72b5-47d9-a561-c562f82ae807-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:54 crc kubenswrapper[4982]: E0224 15:14:54.514622 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31\": container with ID starting with 7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31 not found: ID does not exist" containerID="7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.514657 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31"} err="failed to get container status \"7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31\": rpc error: code = NotFound desc = could not find container \"7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31\": container with ID starting with 7cc3803376f4978feb34c1bbe9afc54374875e41a12d89291eb05a082b969f31 not found: ID does not exist" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.514681 4982 scope.go:117] "RemoveContainer" containerID="4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd" Feb 24 15:14:54 crc kubenswrapper[4982]: E0224 15:14:54.515701 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd\": container with ID starting with 4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd not found: ID does not exist" containerID="4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd" Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.515757 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd"} err="failed to get container status \"4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd\": rpc error: code = NotFound desc = could not find container \"4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd\": container with ID starting with 4e0f76df1f903fc93ecc48ae854c5d43307015ae8400aac4b566345ee7178ddd 
not found: ID does not exist"
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.657290 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-c5b67c676-tlqrh"
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.657343 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-c5b67c676-tlqrh"
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.658259 4982 scope.go:117] "RemoveContainer" containerID="cf9dec3cb5db229962f6f293ee64bef564e03b68399001012d0e7a2d31d0a6cc"
Feb 24 15:14:54 crc kubenswrapper[4982]: E0224 15:14:54.658649 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-c5b67c676-tlqrh_openstack(4a4b4caf-4419-4612-b1a9-72250327c2f3)\"" pod="openstack/heat-api-c5b67c676-tlqrh" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3"
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.716689 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct"
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.717757 4982 scope.go:117] "RemoveContainer" containerID="eb93a46b001942fa4777e62653fa940092106d165ab98aefbed1d34b6d478cec"
Feb 24 15:14:54 crc kubenswrapper[4982]: E0224 15:14:54.718037 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d67c96fcc-6mgct_openstack(ae38b650-d0e6-4fad-9117-c3b6e4d07825)\"" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825"
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.718141 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct"
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.763948 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g5cjx"]
Feb 24 15:14:54 crc kubenswrapper[4982]: I0224 15:14:54.776058 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g5cjx"]
Feb 24 15:14:55 crc kubenswrapper[4982]: I0224 15:14:55.172951 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26101d60-8191-4669-92ab-200740ed3cf8" path="/var/lib/kubelet/pods/26101d60-8191-4669-92ab-200740ed3cf8/volumes"
Feb 24 15:14:55 crc kubenswrapper[4982]: I0224 15:14:55.173801 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7043c05a-72b5-47d9-a561-c562f82ae807" path="/var/lib/kubelet/pods/7043c05a-72b5-47d9-a561-c562f82ae807/volumes"
Feb 24 15:14:55 crc kubenswrapper[4982]: I0224 15:14:55.446131 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:14:55 crc kubenswrapper[4982]: I0224 15:14:55.452691 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerStarted","Data":"2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8"}
Feb 24 15:14:55 crc kubenswrapper[4982]: I0224 15:14:55.516831 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0aa9e47a-4c17-47f4-9541-60b8f91236fd","Type":"ContainerStarted","Data":"808804c0c51155707473fa65b7e7b79e0ccaa8df699c8c339cc41e173b4fae39"}
Feb 24
15:14:55 crc kubenswrapper[4982]: I0224 15:14:55.523546 4982 scope.go:117] "RemoveContainer" containerID="eb93a46b001942fa4777e62653fa940092106d165ab98aefbed1d34b6d478cec"
Feb 24 15:14:55 crc kubenswrapper[4982]: E0224 15:14:55.523887 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6d67c96fcc-6mgct_openstack(ae38b650-d0e6-4fad-9117-c3b6e4d07825)\"" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825"
Feb 24 15:14:55 crc kubenswrapper[4982]: I0224 15:14:55.557276 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-c5b67c676-tlqrh"]
Feb 24 15:14:55 crc kubenswrapper[4982]: I0224 15:14:55.579925 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-74d5f4785d-rlf88"
Feb 24 15:14:55 crc kubenswrapper[4982]: I0224 15:14:55.865410 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.024028 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d67c96fcc-6mgct"]
Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.214269 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-75b5d95788-lvg8w"
Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.539765 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-c5b67c676-tlqrh"
Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.561967 4982 generic.go:334] "Generic (PLEG): container finished" podID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerID="70082dd99b1e75ae290840b2beecb2c0623428c56eb38e096e8ef81392a0a651" exitCode=0
Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.562030 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65698f94df-gbcsr" event={"ID":"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef","Type":"ContainerDied","Data":"70082dd99b1e75ae290840b2beecb2c0623428c56eb38e096e8ef81392a0a651"}
Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.570652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0aa9e47a-4c17-47f4-9541-60b8f91236fd","Type":"ContainerStarted","Data":"81efdc8bfcba9c70829811cb0688057aa92f6d20e8ea10a09791f6fb95fa1399"}
Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.582253 4982 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-api-c5b67c676-tlqrh" Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.582491 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-c5b67c676-tlqrh" event={"ID":"4a4b4caf-4419-4612-b1a9-72250327c2f3","Type":"ContainerDied","Data":"6f9e2455654409571865f8c55e9447fca2d1e608e1ac2e185221f76ceb686893"} Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.582622 4982 scope.go:117] "RemoveContainer" containerID="cf9dec3cb5db229962f6f293ee64bef564e03b68399001012d0e7a2d31d0a6cc" Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.682893 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxpk\" (UniqueName: \"kubernetes.io/projected/4a4b4caf-4419-4612-b1a9-72250327c2f3-kube-api-access-mwxpk\") pod \"4a4b4caf-4419-4612-b1a9-72250327c2f3\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.683284 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-combined-ca-bundle\") pod \"4a4b4caf-4419-4612-b1a9-72250327c2f3\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.683330 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data-custom\") pod \"4a4b4caf-4419-4612-b1a9-72250327c2f3\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.683585 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data\") pod \"4a4b4caf-4419-4612-b1a9-72250327c2f3\" (UID: \"4a4b4caf-4419-4612-b1a9-72250327c2f3\") " Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.691092 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a4b4caf-4419-4612-b1a9-72250327c2f3" (UID: "4a4b4caf-4419-4612-b1a9-72250327c2f3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.692593 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a4b4caf-4419-4612-b1a9-72250327c2f3-kube-api-access-mwxpk" (OuterVolumeSpecName: "kube-api-access-mwxpk") pod "4a4b4caf-4419-4612-b1a9-72250327c2f3" (UID: "4a4b4caf-4419-4612-b1a9-72250327c2f3"). InnerVolumeSpecName "kube-api-access-mwxpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.762876 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a4b4caf-4419-4612-b1a9-72250327c2f3" (UID: "4a4b4caf-4419-4612-b1a9-72250327c2f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.788528 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwxpk\" (UniqueName: \"kubernetes.io/projected/4a4b4caf-4419-4612-b1a9-72250327c2f3-kube-api-access-mwxpk\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.788557 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.788581 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:56 crc kubenswrapper[4982]: I0224 15:14:56.931131 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data" (OuterVolumeSpecName: "config-data") pod "4a4b4caf-4419-4612-b1a9-72250327c2f3" (UID: "4a4b4caf-4419-4612-b1a9-72250327c2f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.026030 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4b4caf-4419-4612-b1a9-72250327c2f3-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.058271 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.233299 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-combined-ca-bundle\") pod \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.233401 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data-custom\") pod \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.233577 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data\") pod \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.233675 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkzbd\" (UniqueName: \"kubernetes.io/projected/ae38b650-d0e6-4fad-9117-c3b6e4d07825-kube-api-access-fkzbd\") pod \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\" (UID: \"ae38b650-d0e6-4fad-9117-c3b6e4d07825\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.241837 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae38b650-d0e6-4fad-9117-c3b6e4d07825" (UID: "ae38b650-d0e6-4fad-9117-c3b6e4d07825"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.247754 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae38b650-d0e6-4fad-9117-c3b6e4d07825-kube-api-access-fkzbd" (OuterVolumeSpecName: "kube-api-access-fkzbd") pod "ae38b650-d0e6-4fad-9117-c3b6e4d07825" (UID: "ae38b650-d0e6-4fad-9117-c3b6e4d07825"). InnerVolumeSpecName "kube-api-access-fkzbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.265583 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-c5b67c676-tlqrh"] Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.275138 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-c5b67c676-tlqrh"] Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.314831 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae38b650-d0e6-4fad-9117-c3b6e4d07825" (UID: "ae38b650-d0e6-4fad-9117-c3b6e4d07825"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.336464 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.336510 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkzbd\" (UniqueName: \"kubernetes.io/projected/ae38b650-d0e6-4fad-9117-c3b6e4d07825-kube-api-access-fkzbd\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.336521 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.341421 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.345598 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data" (OuterVolumeSpecName: "config-data") pod "ae38b650-d0e6-4fad-9117-c3b6e4d07825" (UID: "ae38b650-d0e6-4fad-9117-c3b6e4d07825"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.438097 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-public-tls-certs\") pod \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.438275 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-combined-ca-bundle\") pod \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.438352 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-httpd-config\") pod \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.438376 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-ovndb-tls-certs\") pod \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.438419 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qll7t\" (UniqueName: \"kubernetes.io/projected/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-kube-api-access-qll7t\") pod \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.438437 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-config\") pod \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.438525 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-internal-tls-certs\") pod \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.439094 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae38b650-d0e6-4fad-9117-c3b6e4d07825-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.447637 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" (UID: "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.450099 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-kube-api-access-qll7t" (OuterVolumeSpecName: "kube-api-access-qll7t") pod "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" (UID: "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef"). InnerVolumeSpecName "kube-api-access-qll7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.513668 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" (UID: "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.539653 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" (UID: "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.540418 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-public-tls-certs\") pod \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\" (UID: \"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef\") " Feb 24 15:14:57 crc kubenswrapper[4982]: W0224 15:14:57.541066 4982 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef/volumes/kubernetes.io~secret/public-tls-certs Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.541079 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" (UID: "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.541409 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.541426 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.541437 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.541446 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qll7t\" (UniqueName: \"kubernetes.io/projected/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-kube-api-access-qll7t\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.557240 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-config" (OuterVolumeSpecName: "config") pod "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" (UID: "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.568718 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" (UID: "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.592793 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" (UID: "18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.594029 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" event={"ID":"ae38b650-d0e6-4fad-9117-c3b6e4d07825","Type":"ContainerDied","Data":"692f46c1785cd30a93ebf7a794d7ef7ffe6d3fe534cceb6459a6b146466e4368"} Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.594071 4982 scope.go:117] "RemoveContainer" containerID="eb93a46b001942fa4777e62653fa940092106d165ab98aefbed1d34b6d478cec" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.594146 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6d67c96fcc-6mgct" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.606122 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-65698f94df-gbcsr" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.606160 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65698f94df-gbcsr" event={"ID":"18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef","Type":"ContainerDied","Data":"7db4ffa7bf8ab0607ccef8bd268a29baa64c1f31635af64d4b837541714d7c4c"} Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.613092 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0aa9e47a-4c17-47f4-9541-60b8f91236fd","Type":"ContainerStarted","Data":"ab98160349b1717081ffa5cd812565ec00c2746ada2a8c7ecc032822a133c415"} Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.613268 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.643091 4982 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.643125 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.643138 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.645141 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6d67c96fcc-6mgct"] Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.652926 4982 scope.go:117] "RemoveContainer" containerID="b9ffcb63eb862dc15ad4ec83fbdbd8580f2dcffded18d222cce0c73c8e5907cc" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.673313 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6d67c96fcc-6mgct"] Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.683623 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.683604019 podStartE2EDuration="4.683604019s" podCreationTimestamp="2026-02-24 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:14:57.652939896 +0000 UTC m=+1559.271998389" watchObservedRunningTime="2026-02-24 15:14:57.683604019 +0000 UTC m=+1559.302662502" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.797423 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-65698f94df-gbcsr"] Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.801943 4982 scope.go:117] "RemoveContainer" containerID="70082dd99b1e75ae290840b2beecb2c0623428c56eb38e096e8ef81392a0a651" Feb 24 15:14:57 crc kubenswrapper[4982]: I0224 15:14:57.808921 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-65698f94df-gbcsr"] Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.163368 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" path="/var/lib/kubelet/pods/18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef/volumes" Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.164818 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3" path="/var/lib/kubelet/pods/4a4b4caf-4419-4612-b1a9-72250327c2f3/volumes" Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.165451 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" path="/var/lib/kubelet/pods/ae38b650-d0e6-4fad-9117-c3b6e4d07825/volumes" Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.678852 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerStarted","Data":"d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556"} Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.679370 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.703275 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.537433193 podStartE2EDuration="8.703255632s" podCreationTimestamp="2026-02-24 15:14:51 +0000 UTC" firstStartedPulling="2026-02-24 15:14:52.320306731 +0000 UTC m=+1553.939365224" lastFinishedPulling="2026-02-24 15:14:58.48612918 +0000 UTC m=+1560.105187663" observedRunningTime="2026-02-24 15:14:59.700703843 +0000 UTC m=+1561.319762336" watchObservedRunningTime="2026-02-24 15:14:59.703255632 +0000 UTC m=+1561.322314125" Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.808466 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.808816 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerName="glance-log" containerID="cri-o://8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0" gracePeriod=30 Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.809048 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerName="glance-httpd" containerID="cri-o://49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9" gracePeriod=30 Feb 24 15:14:59 crc kubenswrapper[4982]: I0224 15:14:59.972334 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6f466c7dbf-ltqbw" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.020296 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7964bbc76-m2h7r"] Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.020522 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7964bbc76-m2h7r" podUID="a92127b8-25b0-4d2a-874d-91cf1acfdc79" containerName="heat-engine" containerID="cri-o://b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae" gracePeriod=60 Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.158338 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh"] Feb 24 15:15:00 crc kubenswrapper[4982]: E0224 15:15:00.158895 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-httpd" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.158908 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-httpd" Feb 24 15:15:00 crc kubenswrapper[4982]: E0224 15:15:00.158936 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3" containerName="heat-api" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.158942 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3" containerName="heat-api" Feb 24 15:15:00 crc kubenswrapper[4982]: E0224 15:15:00.158949 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" containerName="heat-cfnapi" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.158955 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" containerName="heat-cfnapi" Feb 24 15:15:00 crc kubenswrapper[4982]: E0224 15:15:00.158966 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7043c05a-72b5-47d9-a561-c562f82ae807" containerName="init" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.158972 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7043c05a-72b5-47d9-a561-c562f82ae807" containerName="init" Feb 24 15:15:00 crc kubenswrapper[4982]: E0224 15:15:00.158985 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7043c05a-72b5-47d9-a561-c562f82ae807" containerName="dnsmasq-dns" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.158990 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7043c05a-72b5-47d9-a561-c562f82ae807" containerName="dnsmasq-dns" Feb 24 15:15:00 crc kubenswrapper[4982]: E0224 15:15:00.159005 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-api" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.159010 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-api" Feb 24 15:15:00 crc kubenswrapper[4982]: E0224 15:15:00.159029 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" containerName="heat-cfnapi" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.159035 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" containerName="heat-cfnapi" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.159239 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-api" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.159250 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="18dc9cbb-bd6b-4210-9a3d-cc66a81f65ef" containerName="neutron-httpd" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.159266 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3" containerName="heat-api" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.159275 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3" containerName="heat-api" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.159285 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" containerName="heat-cfnapi" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.159295 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae38b650-d0e6-4fad-9117-c3b6e4d07825" 
containerName="heat-cfnapi" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.159311 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7043c05a-72b5-47d9-a561-c562f82ae807" containerName="dnsmasq-dns" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.160158 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.162285 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.165473 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.183374 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh"] Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.314003 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88kdz\" (UniqueName: \"kubernetes.io/projected/70b1a558-6f90-43fc-8319-c988ad2c3a1d-kube-api-access-88kdz\") pod \"collect-profiles-29532435-2fszh\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.317899 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b1a558-6f90-43fc-8319-c988ad2c3a1d-secret-volume\") pod \"collect-profiles-29532435-2fszh\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.319713 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b1a558-6f90-43fc-8319-c988ad2c3a1d-config-volume\") pod \"collect-profiles-29532435-2fszh\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.422226 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b1a558-6f90-43fc-8319-c988ad2c3a1d-config-volume\") pod \"collect-profiles-29532435-2fszh\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.422433 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88kdz\" (UniqueName: \"kubernetes.io/projected/70b1a558-6f90-43fc-8319-c988ad2c3a1d-kube-api-access-88kdz\") pod \"collect-profiles-29532435-2fszh\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.422587 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b1a558-6f90-43fc-8319-c988ad2c3a1d-secret-volume\") pod \"collect-profiles-29532435-2fszh\" (UID: 
\"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.423322 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b1a558-6f90-43fc-8319-c988ad2c3a1d-config-volume\") pod \"collect-profiles-29532435-2fszh\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.429097 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b1a558-6f90-43fc-8319-c988ad2c3a1d-secret-volume\") pod \"collect-profiles-29532435-2fszh\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.446740 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88kdz\" (UniqueName: \"kubernetes.io/projected/70b1a558-6f90-43fc-8319-c988ad2c3a1d-kube-api-access-88kdz\") pod \"collect-profiles-29532435-2fszh\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.478857 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.701996 4982 generic.go:334] "Generic (PLEG): container finished" podID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerID="8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0" exitCode=143 Feb 24 15:15:00 crc kubenswrapper[4982]: I0224 15:15:00.702055 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed67ff00-778a-4253-8f39-52d8ecbcc41b","Type":"ContainerDied","Data":"8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0"} Feb 24 15:15:01 crc kubenswrapper[4982]: I0224 15:15:01.064117 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh"] Feb 24 15:15:01 crc kubenswrapper[4982]: I0224 15:15:01.206297 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:01 crc kubenswrapper[4982]: I0224 15:15:01.712012 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="ceilometer-central-agent" containerID="cri-o://dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe" gracePeriod=30 Feb 24 15:15:01 crc kubenswrapper[4982]: I0224 15:15:01.712494 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="proxy-httpd" containerID="cri-o://d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556" gracePeriod=30 Feb 24 15:15:01 crc kubenswrapper[4982]: I0224 15:15:01.712651 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="sg-core" containerID="cri-o://2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8" gracePeriod=30 Feb 24 15:15:01 crc 
kubenswrapper[4982]: I0224 15:15:01.712696 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="ceilometer-notification-agent" containerID="cri-o://2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2" gracePeriod=30 Feb 24 15:15:01 crc kubenswrapper[4982]: I0224 15:15:01.712800 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" event={"ID":"70b1a558-6f90-43fc-8319-c988ad2c3a1d","Type":"ContainerStarted","Data":"69548f0616e983abd01e19c9de2e2a8cebe06073987236aa63ec46a8433f14e3"} Feb 24 15:15:01 crc kubenswrapper[4982]: I0224 15:15:01.712827 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" event={"ID":"70b1a558-6f90-43fc-8319-c988ad2c3a1d","Type":"ContainerStarted","Data":"5035cace40a401953fce871e529ed5996011d2e9da9e9a4d3421081a24e4ae55"} Feb 24 15:15:01 crc kubenswrapper[4982]: I0224 15:15:01.741084 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" podStartSLOduration=1.7410622409999998 podStartE2EDuration="1.741062241s" podCreationTimestamp="2026-02-24 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:15:01.730680163 +0000 UTC m=+1563.349738656" watchObservedRunningTime="2026-02-24 15:15:01.741062241 +0000 UTC m=+1563.360120744" Feb 24 15:15:02 crc kubenswrapper[4982]: I0224 15:15:02.724216 4982 generic.go:334] "Generic (PLEG): container finished" podID="c1571922-8d74-4fb5-bf86-1093938b554d" containerID="d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556" exitCode=0 Feb 24 15:15:02 crc kubenswrapper[4982]: I0224 15:15:02.724548 4982 generic.go:334] "Generic (PLEG): container finished" podID="c1571922-8d74-4fb5-bf86-1093938b554d" containerID="2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8" exitCode=2 Feb 24 15:15:02 crc kubenswrapper[4982]: I0224 15:15:02.724563 4982 generic.go:334] "Generic (PLEG): container finished" podID="c1571922-8d74-4fb5-bf86-1093938b554d" containerID="2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2" exitCode=0 Feb 24 15:15:02 crc kubenswrapper[4982]: I0224 15:15:02.724390 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerDied","Data":"d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556"} Feb 24 15:15:02 crc kubenswrapper[4982]: I0224 15:15:02.724631 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerDied","Data":"2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8"} Feb 24 15:15:02 crc kubenswrapper[4982]: I0224 15:15:02.724648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerDied","Data":"2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2"} Feb 24 15:15:03 crc kubenswrapper[4982]: E0224 15:15:03.199341 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 24 15:15:03 crc kubenswrapper[4982]: E0224 15:15:03.203748 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 24 15:15:03 crc kubenswrapper[4982]: E0224 15:15:03.205645 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 24 15:15:03 crc kubenswrapper[4982]: E0224 15:15:03.205692 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7964bbc76-m2h7r" podUID="a92127b8-25b0-4d2a-874d-91cf1acfdc79" containerName="heat-engine" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.611530 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.718341 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-combined-ca-bundle\") pod \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.718986 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.719036 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-config-data\") pod \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.719101 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddfm\" (UniqueName: \"kubernetes.io/projected/ed67ff00-778a-4253-8f39-52d8ecbcc41b-kube-api-access-pddfm\") pod \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.719118 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-public-tls-certs\") pod \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.719710 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-httpd-run\") pod 
\"ed67ff00-778a-4253-8f39-52d8ecbcc41b\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.719736 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-logs\") pod \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.719785 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-scripts\") pod \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\" (UID: \"ed67ff00-778a-4253-8f39-52d8ecbcc41b\") " Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.722141 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ed67ff00-778a-4253-8f39-52d8ecbcc41b" (UID: "ed67ff00-778a-4253-8f39-52d8ecbcc41b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.722382 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-logs" (OuterVolumeSpecName: "logs") pod "ed67ff00-778a-4253-8f39-52d8ecbcc41b" (UID: "ed67ff00-778a-4253-8f39-52d8ecbcc41b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.754744 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-scripts" (OuterVolumeSpecName: "scripts") pod "ed67ff00-778a-4253-8f39-52d8ecbcc41b" (UID: "ed67ff00-778a-4253-8f39-52d8ecbcc41b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.754810 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed67ff00-778a-4253-8f39-52d8ecbcc41b-kube-api-access-pddfm" (OuterVolumeSpecName: "kube-api-access-pddfm") pod "ed67ff00-778a-4253-8f39-52d8ecbcc41b" (UID: "ed67ff00-778a-4253-8f39-52d8ecbcc41b"). InnerVolumeSpecName "kube-api-access-pddfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.802850 4982 generic.go:334] "Generic (PLEG): container finished" podID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerID="49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9" exitCode=0 Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.802973 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed67ff00-778a-4253-8f39-52d8ecbcc41b","Type":"ContainerDied","Data":"49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9"} Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.803000 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ed67ff00-778a-4253-8f39-52d8ecbcc41b","Type":"ContainerDied","Data":"0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d"} Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.803046 4982 scope.go:117] "RemoveContainer" containerID="49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.803222 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.817740 4982 generic.go:334] "Generic (PLEG): container finished" podID="70b1a558-6f90-43fc-8319-c988ad2c3a1d" containerID="69548f0616e983abd01e19c9de2e2a8cebe06073987236aa63ec46a8433f14e3" exitCode=0 Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.817796 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" event={"ID":"70b1a558-6f90-43fc-8319-c988ad2c3a1d","Type":"ContainerDied","Data":"69548f0616e983abd01e19c9de2e2a8cebe06073987236aa63ec46a8433f14e3"} Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.824691 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pddfm\" (UniqueName: \"kubernetes.io/projected/ed67ff00-778a-4253-8f39-52d8ecbcc41b-kube-api-access-pddfm\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.824729 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.824763 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed67ff00-778a-4253-8f39-52d8ecbcc41b-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.824775 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.834645 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b" (OuterVolumeSpecName: "glance") pod "ed67ff00-778a-4253-8f39-52d8ecbcc41b" (UID: "ed67ff00-778a-4253-8f39-52d8ecbcc41b"). InnerVolumeSpecName "pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.861675 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed67ff00-778a-4253-8f39-52d8ecbcc41b" (UID: "ed67ff00-778a-4253-8f39-52d8ecbcc41b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.880402 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ed67ff00-778a-4253-8f39-52d8ecbcc41b" (UID: "ed67ff00-778a-4253-8f39-52d8ecbcc41b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.924334 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-config-data" (OuterVolumeSpecName: "config-data") pod "ed67ff00-778a-4253-8f39-52d8ecbcc41b" (UID: "ed67ff00-778a-4253-8f39-52d8ecbcc41b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.927451 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.927490 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.927556 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") on node \"crc\" " Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.927575 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed67ff00-778a-4253-8f39-52d8ecbcc41b-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.954162 4982 scope.go:117] "RemoveContainer" containerID="8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.965024 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.965263 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b") on node "crc" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.984627 4982 scope.go:117] "RemoveContainer" containerID="49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9" Feb 24 15:15:03 crc kubenswrapper[4982]: E0224 15:15:03.986637 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9\": container with ID starting with 49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9 not found: ID does not exist" containerID="49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.986700 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9"} err="failed to get container status \"49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9\": rpc error: code = NotFound desc = could not find container \"49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9\": container with ID starting with 49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9 not found: ID does not exist" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.986736 4982 scope.go:117] "RemoveContainer" containerID="8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0" Feb 24 15:15:03 crc kubenswrapper[4982]: E0224 15:15:03.987393 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0\": container with ID starting with 8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0 not found: ID does not exist" containerID="8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0" Feb 24 15:15:03 crc kubenswrapper[4982]: I0224 15:15:03.987464 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0"} err="failed to get container status \"8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0\": rpc error: code = NotFound desc = could not find container \"8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0\": container with ID starting with 8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0 not found: ID does not exist" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.029867 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.172436 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.191215 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.232023 4982 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 24 15:15:04 crc kubenswrapper[4982]: E0224 15:15:04.232510 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3" containerName="heat-api" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.232523 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4b4caf-4419-4612-b1a9-72250327c2f3" containerName="heat-api" Feb 24 15:15:04 crc kubenswrapper[4982]: E0224 15:15:04.232543 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerName="glance-httpd" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.232549 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerName="glance-httpd" Feb 24 15:15:04 crc kubenswrapper[4982]: E0224 15:15:04.232559 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerName="glance-log" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.232565 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerName="glance-log" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.232810 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerName="glance-httpd" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.232835 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" containerName="glance-log" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.234149 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.236922 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.240172 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.258274 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.339936 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.340003 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd138cb-0c04-4683-9af6-b623fb39c84f-logs\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.340072 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " 
pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.340115 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.340235 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.340269 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5vr\" (UniqueName: \"kubernetes.io/projected/dfd138cb-0c04-4683-9af6-b623fb39c84f-kube-api-access-2q5vr\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.341133 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.341240 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfd138cb-0c04-4683-9af6-b623fb39c84f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.443647 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.443699 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q5vr\" (UniqueName: \"kubernetes.io/projected/dfd138cb-0c04-4683-9af6-b623fb39c84f-kube-api-access-2q5vr\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.443728 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.444339 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfd138cb-0c04-4683-9af6-b623fb39c84f-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.444513 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.444598 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd138cb-0c04-4683-9af6-b623fb39c84f-logs\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.444742 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.444828 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.445592 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfd138cb-0c04-4683-9af6-b623fb39c84f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.445815 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd138cb-0c04-4683-9af6-b623fb39c84f-logs\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.449954 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.450173 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.450769 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " 
pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.455980 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfd138cb-0c04-4683-9af6-b623fb39c84f-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.462297 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q5vr\" (UniqueName: \"kubernetes.io/projected/dfd138cb-0c04-4683-9af6-b623fb39c84f-kube-api-access-2q5vr\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.463599 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.463697 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e56090e32bcbf36b2e5fe0665116dc217294b29490f5f5ced3111bea03953390/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.532526 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-064368d5-4a72-4b3a-a0dc-79d0edc5c07b\") pod \"glance-default-external-api-0\" (UID: \"dfd138cb-0c04-4683-9af6-b623fb39c84f\") " pod="openstack/glance-default-external-api-0" Feb 24 15:15:04 crc kubenswrapper[4982]: I0224 15:15:04.556182 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.295319 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed67ff00-778a-4253-8f39-52d8ecbcc41b" path="/var/lib/kubelet/pods/ed67ff00-778a-4253-8f39-52d8ecbcc41b/volumes" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.353694 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.353904 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerName="glance-log" containerID="cri-o://06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d" gracePeriod=30 Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.354394 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerName="glance-httpd" containerID="cri-o://8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73" gracePeriod=30 Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.376610 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.461380 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.587486 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88kdz\" (UniqueName: \"kubernetes.io/projected/70b1a558-6f90-43fc-8319-c988ad2c3a1d-kube-api-access-88kdz\") pod \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.588107 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b1a558-6f90-43fc-8319-c988ad2c3a1d-secret-volume\") pod \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.588192 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b1a558-6f90-43fc-8319-c988ad2c3a1d-config-volume\") pod \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\" (UID: \"70b1a558-6f90-43fc-8319-c988ad2c3a1d\") " Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.589623 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70b1a558-6f90-43fc-8319-c988ad2c3a1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "70b1a558-6f90-43fc-8319-c988ad2c3a1d" (UID: "70b1a558-6f90-43fc-8319-c988ad2c3a1d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.597554 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b1a558-6f90-43fc-8319-c988ad2c3a1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70b1a558-6f90-43fc-8319-c988ad2c3a1d" (UID: "70b1a558-6f90-43fc-8319-c988ad2c3a1d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.599157 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b1a558-6f90-43fc-8319-c988ad2c3a1d-kube-api-access-88kdz" (OuterVolumeSpecName: "kube-api-access-88kdz") pod "70b1a558-6f90-43fc-8319-c988ad2c3a1d" (UID: "70b1a558-6f90-43fc-8319-c988ad2c3a1d"). InnerVolumeSpecName "kube-api-access-88kdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.691653 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b1a558-6f90-43fc-8319-c988ad2c3a1d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.691690 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b1a558-6f90-43fc-8319-c988ad2c3a1d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.691700 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88kdz\" (UniqueName: \"kubernetes.io/projected/70b1a558-6f90-43fc-8319-c988ad2c3a1d-kube-api-access-88kdz\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.849423 4982 generic.go:334] "Generic (PLEG): container finished" podID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerID="06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d" exitCode=143 Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.849557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee0ec3ad-df19-4e19-a288-d6ca32779160","Type":"ContainerDied","Data":"06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d"} Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.855396 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" event={"ID":"70b1a558-6f90-43fc-8319-c988ad2c3a1d","Type":"ContainerDied","Data":"5035cace40a401953fce871e529ed5996011d2e9da9e9a4d3421081a24e4ae55"} Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.855436 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5035cace40a401953fce871e529ed5996011d2e9da9e9a4d3421081a24e4ae55" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.855430 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh" Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.858410 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfd138cb-0c04-4683-9af6-b623fb39c84f","Type":"ContainerStarted","Data":"293decf0b6cdefd487373ec92f64ccc4620da02ecbd0d71102d28a3cf2a270b8"} Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.859994 4982 generic.go:334] "Generic (PLEG): container finished" podID="a92127b8-25b0-4d2a-874d-91cf1acfdc79" containerID="b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae" exitCode=0 Feb 24 15:15:05 crc kubenswrapper[4982]: I0224 15:15:05.860044 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7964bbc76-m2h7r" event={"ID":"a92127b8-25b0-4d2a-874d-91cf1acfdc79","Type":"ContainerDied","Data":"b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae"} Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.560708 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.717294 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data-custom\") pod \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.717390 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data\") pod \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.717538 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgs87\" (UniqueName: \"kubernetes.io/projected/a92127b8-25b0-4d2a-874d-91cf1acfdc79-kube-api-access-wgs87\") pod \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.717579 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-combined-ca-bundle\") pod \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\" (UID: \"a92127b8-25b0-4d2a-874d-91cf1acfdc79\") " Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.737927 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92127b8-25b0-4d2a-874d-91cf1acfdc79-kube-api-access-wgs87" (OuterVolumeSpecName: "kube-api-access-wgs87") pod "a92127b8-25b0-4d2a-874d-91cf1acfdc79" (UID: "a92127b8-25b0-4d2a-874d-91cf1acfdc79"). InnerVolumeSpecName "kube-api-access-wgs87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.746554 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a92127b8-25b0-4d2a-874d-91cf1acfdc79" (UID: "a92127b8-25b0-4d2a-874d-91cf1acfdc79"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.790622 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a92127b8-25b0-4d2a-874d-91cf1acfdc79" (UID: "a92127b8-25b0-4d2a-874d-91cf1acfdc79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.805609 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data" (OuterVolumeSpecName: "config-data") pod "a92127b8-25b0-4d2a-874d-91cf1acfdc79" (UID: "a92127b8-25b0-4d2a-874d-91cf1acfdc79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.820491 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.820541 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.820550 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgs87\" (UniqueName: \"kubernetes.io/projected/a92127b8-25b0-4d2a-874d-91cf1acfdc79-kube-api-access-wgs87\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.820561 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92127b8-25b0-4d2a-874d-91cf1acfdc79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.872543 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7964bbc76-m2h7r" event={"ID":"a92127b8-25b0-4d2a-874d-91cf1acfdc79","Type":"ContainerDied","Data":"038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424"} Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.872596 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7964bbc76-m2h7r" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.872685 4982 scope.go:117] "RemoveContainer" containerID="b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae" Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.937839 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7964bbc76-m2h7r"] Feb 24 15:15:06 crc kubenswrapper[4982]: I0224 15:15:06.949434 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7964bbc76-m2h7r"] Feb 24 15:15:07 crc kubenswrapper[4982]: I0224 15:15:07.157262 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92127b8-25b0-4d2a-874d-91cf1acfdc79" path="/var/lib/kubelet/pods/a92127b8-25b0-4d2a-874d-91cf1acfdc79/volumes" Feb 24 15:15:07 crc kubenswrapper[4982]: I0224 15:15:07.348230 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 24 15:15:07 crc kubenswrapper[4982]: I0224 15:15:07.883941 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfd138cb-0c04-4683-9af6-b623fb39c84f","Type":"ContainerStarted","Data":"245bb431482495095e0cc2add283a820296234e83e3c286c06146e98fbae8480"} Feb 24 15:15:07 crc kubenswrapper[4982]: I0224 15:15:07.884286 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfd138cb-0c04-4683-9af6-b623fb39c84f","Type":"ContainerStarted","Data":"fd159d68bec5a101926f1163d08e5023da7228dbc76d4710f238f882f11c04d8"} Feb 24 15:15:07 crc kubenswrapper[4982]: I0224 15:15:07.915794 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.915768453 podStartE2EDuration="3.915768453s" podCreationTimestamp="2026-02-24 15:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:15:07.906275118 +0000 UTC m=+1569.525333611" watchObservedRunningTime="2026-02-24 15:15:07.915768453 +0000 UTC m=+1569.534826966" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.150432 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jchcv"] Feb 24 15:15:08 crc kubenswrapper[4982]: E0224 15:15:08.151484 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92127b8-25b0-4d2a-874d-91cf1acfdc79" containerName="heat-engine" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.151593 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92127b8-25b0-4d2a-874d-91cf1acfdc79" containerName="heat-engine" Feb 24 15:15:08 crc kubenswrapper[4982]: E0224 15:15:08.151690 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b1a558-6f90-43fc-8319-c988ad2c3a1d" containerName="collect-profiles" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.151742 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b1a558-6f90-43fc-8319-c988ad2c3a1d" containerName="collect-profiles" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.152018 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92127b8-25b0-4d2a-874d-91cf1acfdc79" containerName="heat-engine" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.152088 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b1a558-6f90-43fc-8319-c988ad2c3a1d" containerName="collect-profiles" Feb 24 
15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.154550 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.165613 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jchcv"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.263059 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-756sh"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.266345 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.266375 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7297d-6126-40bc-b5b4-54d01f6cb255-operator-scripts\") pod \"nova-api-db-create-jchcv\" (UID: \"62f7297d-6126-40bc-b5b4-54d01f6cb255\") " pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.266714 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf6gt\" (UniqueName: \"kubernetes.io/projected/62f7297d-6126-40bc-b5b4-54d01f6cb255-kube-api-access-kf6gt\") pod \"nova-api-db-create-jchcv\" (UID: \"62f7297d-6126-40bc-b5b4-54d01f6cb255\") " pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.277280 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-756sh"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.385314 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7297d-6126-40bc-b5b4-54d01f6cb255-operator-scripts\") pod \"nova-api-db-create-jchcv\" (UID: \"62f7297d-6126-40bc-b5b4-54d01f6cb255\") " pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.385441 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mft76\" (UniqueName: \"kubernetes.io/projected/30138906-e9e1-46d1-87e1-fd842efdf3ed-kube-api-access-mft76\") pod \"nova-cell0-db-create-756sh\" (UID: \"30138906-e9e1-46d1-87e1-fd842efdf3ed\") " pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.386222 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf6gt\" (UniqueName: \"kubernetes.io/projected/62f7297d-6126-40bc-b5b4-54d01f6cb255-kube-api-access-kf6gt\") pod \"nova-api-db-create-jchcv\" (UID: \"62f7297d-6126-40bc-b5b4-54d01f6cb255\") " pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.386461 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30138906-e9e1-46d1-87e1-fd842efdf3ed-operator-scripts\") pod \"nova-cell0-db-create-756sh\" (UID: \"30138906-e9e1-46d1-87e1-fd842efdf3ed\") " pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.387105 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7297d-6126-40bc-b5b4-54d01f6cb255-operator-scripts\") pod \"nova-api-db-create-jchcv\" (UID: 
\"62f7297d-6126-40bc-b5b4-54d01f6cb255\") " pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.395892 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dzs7v"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.401202 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.419215 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf6gt\" (UniqueName: \"kubernetes.io/projected/62f7297d-6126-40bc-b5b4-54d01f6cb255-kube-api-access-kf6gt\") pod \"nova-api-db-create-jchcv\" (UID: \"62f7297d-6126-40bc-b5b4-54d01f6cb255\") " pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.425850 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dzs7v"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.450804 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-63de-account-create-update-9mg92"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.465574 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-63de-account-create-update-9mg92"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.468481 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.475680 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.490693 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65e38160-7267-4499-85f3-e05aeff196dc-operator-scripts\") pod \"nova-cell1-db-create-dzs7v\" (UID: \"65e38160-7267-4499-85f3-e05aeff196dc\") " pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.490830 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30138906-e9e1-46d1-87e1-fd842efdf3ed-operator-scripts\") pod \"nova-cell0-db-create-756sh\" (UID: \"30138906-e9e1-46d1-87e1-fd842efdf3ed\") " pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.491090 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f85k\" (UniqueName: \"kubernetes.io/projected/65e38160-7267-4499-85f3-e05aeff196dc-kube-api-access-2f85k\") pod \"nova-cell1-db-create-dzs7v\" (UID: \"65e38160-7267-4499-85f3-e05aeff196dc\") " pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.491462 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mft76\" (UniqueName: \"kubernetes.io/projected/30138906-e9e1-46d1-87e1-fd842efdf3ed-kube-api-access-mft76\") pod \"nova-cell0-db-create-756sh\" (UID: \"30138906-e9e1-46d1-87e1-fd842efdf3ed\") " pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.491625 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30138906-e9e1-46d1-87e1-fd842efdf3ed-operator-scripts\") 
pod \"nova-cell0-db-create-756sh\" (UID: \"30138906-e9e1-46d1-87e1-fd842efdf3ed\") " pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.493280 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.511655 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mft76\" (UniqueName: \"kubernetes.io/projected/30138906-e9e1-46d1-87e1-fd842efdf3ed-kube-api-access-mft76\") pod \"nova-cell0-db-create-756sh\" (UID: \"30138906-e9e1-46d1-87e1-fd842efdf3ed\") " pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.587990 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1073-account-create-update-75xhn"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.590281 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.593232 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.600035 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f85k\" (UniqueName: \"kubernetes.io/projected/65e38160-7267-4499-85f3-e05aeff196dc-kube-api-access-2f85k\") pod \"nova-cell1-db-create-dzs7v\" (UID: \"65e38160-7267-4499-85f3-e05aeff196dc\") " pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.601118 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-operator-scripts\") pod \"nova-api-63de-account-create-update-9mg92\" (UID: \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\") " pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.601284 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99phn\" (UniqueName: \"kubernetes.io/projected/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-kube-api-access-99phn\") pod \"nova-api-63de-account-create-update-9mg92\" (UID: \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\") " pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.601570 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65e38160-7267-4499-85f3-e05aeff196dc-operator-scripts\") pod \"nova-cell1-db-create-dzs7v\" (UID: \"65e38160-7267-4499-85f3-e05aeff196dc\") " pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.603304 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.605612 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65e38160-7267-4499-85f3-e05aeff196dc-operator-scripts\") pod \"nova-cell1-db-create-dzs7v\" (UID: \"65e38160-7267-4499-85f3-e05aeff196dc\") " pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.621657 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f85k\" (UniqueName: \"kubernetes.io/projected/65e38160-7267-4499-85f3-e05aeff196dc-kube-api-access-2f85k\") pod \"nova-cell1-db-create-dzs7v\" (UID: \"65e38160-7267-4499-85f3-e05aeff196dc\") " pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.631398 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1073-account-create-update-75xhn"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.705428 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0517f81d-f724-4803-9e98-85d228c39b2f-operator-scripts\") pod \"nova-cell0-1073-account-create-update-75xhn\" (UID: \"0517f81d-f724-4803-9e98-85d228c39b2f\") " pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.705815 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-operator-scripts\") pod \"nova-api-63de-account-create-update-9mg92\" (UID: \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\") " pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.705917 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmgn\" (UniqueName: \"kubernetes.io/projected/0517f81d-f724-4803-9e98-85d228c39b2f-kube-api-access-cgmgn\") pod \"nova-cell0-1073-account-create-update-75xhn\" (UID: \"0517f81d-f724-4803-9e98-85d228c39b2f\") " pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.706059 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99phn\" (UniqueName: \"kubernetes.io/projected/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-kube-api-access-99phn\") pod \"nova-api-63de-account-create-update-9mg92\" (UID: \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\") " pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.709630 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-operator-scripts\") pod \"nova-api-63de-account-create-update-9mg92\" (UID: \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\") " pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.745212 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99phn\" (UniqueName: \"kubernetes.io/projected/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-kube-api-access-99phn\") pod \"nova-api-63de-account-create-update-9mg92\" (UID: \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\") " 
pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.821771 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0517f81d-f724-4803-9e98-85d228c39b2f-operator-scripts\") pod \"nova-cell0-1073-account-create-update-75xhn\" (UID: \"0517f81d-f724-4803-9e98-85d228c39b2f\") " pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.821875 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmgn\" (UniqueName: \"kubernetes.io/projected/0517f81d-f724-4803-9e98-85d228c39b2f-kube-api-access-cgmgn\") pod \"nova-cell0-1073-account-create-update-75xhn\" (UID: \"0517f81d-f724-4803-9e98-85d228c39b2f\") " pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.828928 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-74e2-account-create-update-krmpd"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.830930 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.833519 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.834318 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0517f81d-f724-4803-9e98-85d228c39b2f-operator-scripts\") pod \"nova-cell0-1073-account-create-update-75xhn\" (UID: \"0517f81d-f724-4803-9e98-85d228c39b2f\") " pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.845951 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmgn\" (UniqueName: \"kubernetes.io/projected/0517f81d-f724-4803-9e98-85d228c39b2f-kube-api-access-cgmgn\") pod \"nova-cell0-1073-account-create-update-75xhn\" (UID: \"0517f81d-f724-4803-9e98-85d228c39b2f\") " pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.890602 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.896040 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-74e2-account-create-update-krmpd"] Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.905088 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.928298 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.931417 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdcc0e1-05fe-45a5-929e-eba045439850-operator-scripts\") pod \"nova-cell1-74e2-account-create-update-krmpd\" (UID: \"7bdcc0e1-05fe-45a5-929e-eba045439850\") " pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.931693 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k989g\" (UniqueName: \"kubernetes.io/projected/7bdcc0e1-05fe-45a5-929e-eba045439850-kube-api-access-k989g\") pod \"nova-cell1-74e2-account-create-update-krmpd\" (UID: \"7bdcc0e1-05fe-45a5-929e-eba045439850\") " pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.965821 4982 generic.go:334] "Generic (PLEG): container finished" podID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerID="8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73" exitCode=0 Feb 24 15:15:08 crc kubenswrapper[4982]: I0224 15:15:08.966279 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee0ec3ad-df19-4e19-a288-d6ca32779160","Type":"ContainerDied","Data":"8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73"} Feb 24 15:15:09 crc kubenswrapper[4982]: I0224 15:15:09.033851 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdcc0e1-05fe-45a5-929e-eba045439850-operator-scripts\") pod \"nova-cell1-74e2-account-create-update-krmpd\" (UID: \"7bdcc0e1-05fe-45a5-929e-eba045439850\") " pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:09 crc kubenswrapper[4982]: I0224 15:15:09.033962 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k989g\" (UniqueName: \"kubernetes.io/projected/7bdcc0e1-05fe-45a5-929e-eba045439850-kube-api-access-k989g\") pod \"nova-cell1-74e2-account-create-update-krmpd\" (UID: \"7bdcc0e1-05fe-45a5-929e-eba045439850\") " pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:09 crc kubenswrapper[4982]: I0224 15:15:09.035929 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdcc0e1-05fe-45a5-929e-eba045439850-operator-scripts\") pod \"nova-cell1-74e2-account-create-update-krmpd\" (UID: \"7bdcc0e1-05fe-45a5-929e-eba045439850\") " pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:09 crc kubenswrapper[4982]: I0224 15:15:09.067293 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k989g\" (UniqueName: \"kubernetes.io/projected/7bdcc0e1-05fe-45a5-929e-eba045439850-kube-api-access-k989g\") pod \"nova-cell1-74e2-account-create-update-krmpd\" (UID: \"7bdcc0e1-05fe-45a5-929e-eba045439850\") " pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:09 crc kubenswrapper[4982]: I0224 15:15:09.249060 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:09 crc kubenswrapper[4982]: I0224 15:15:09.617301 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-756sh"] Feb 24 15:15:09 crc kubenswrapper[4982]: I0224 15:15:09.798185 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jchcv"] Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.011673 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jchcv" event={"ID":"62f7297d-6126-40bc-b5b4-54d01f6cb255","Type":"ContainerStarted","Data":"ba0a125b2578dcf380c1db74d4961e4b4066ac035b5e84924fe75742a8bb9c6d"} Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.011905 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.017406 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-756sh" event={"ID":"30138906-e9e1-46d1-87e1-fd842efdf3ed","Type":"ContainerStarted","Data":"a11f69df1a759ede49bf3184231b8d0b57d4bbb0bdbdb1ed7ebd9cd3aecc1aa9"} Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.017769 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-63de-account-create-update-9mg92"] Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.035635 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee0ec3ad-df19-4e19-a288-d6ca32779160","Type":"ContainerDied","Data":"a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac"} Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.035682 4982 scope.go:117] "RemoveContainer" containerID="8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.035690 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.110211 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1073-account-create-update-75xhn"] Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.121425 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dzs7v"] Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.130662 4982 scope.go:117] "RemoveContainer" containerID="06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.193845 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-httpd-run\") pod \"ee0ec3ad-df19-4e19-a288-d6ca32779160\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.193942 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-internal-tls-certs\") pod \"ee0ec3ad-df19-4e19-a288-d6ca32779160\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.194002 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-config-data\") pod \"ee0ec3ad-df19-4e19-a288-d6ca32779160\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.195541 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ee0ec3ad-df19-4e19-a288-d6ca32779160" (UID: "ee0ec3ad-df19-4e19-a288-d6ca32779160"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.210110 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"ee0ec3ad-df19-4e19-a288-d6ca32779160\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.210477 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-scripts\") pod \"ee0ec3ad-df19-4e19-a288-d6ca32779160\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.210634 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-logs\") pod \"ee0ec3ad-df19-4e19-a288-d6ca32779160\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.210749 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtxrn\" (UniqueName: \"kubernetes.io/projected/ee0ec3ad-df19-4e19-a288-d6ca32779160-kube-api-access-xtxrn\") pod \"ee0ec3ad-df19-4e19-a288-d6ca32779160\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.210877 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-combined-ca-bundle\") pod \"ee0ec3ad-df19-4e19-a288-d6ca32779160\" (UID: \"ee0ec3ad-df19-4e19-a288-d6ca32779160\") " Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.214250 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.216720 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-logs" (OuterVolumeSpecName: "logs") pod "ee0ec3ad-df19-4e19-a288-d6ca32779160" (UID: "ee0ec3ad-df19-4e19-a288-d6ca32779160"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.222829 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-scripts" (OuterVolumeSpecName: "scripts") pod "ee0ec3ad-df19-4e19-a288-d6ca32779160" (UID: "ee0ec3ad-df19-4e19-a288-d6ca32779160"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.241693 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0ec3ad-df19-4e19-a288-d6ca32779160-kube-api-access-xtxrn" (OuterVolumeSpecName: "kube-api-access-xtxrn") pod "ee0ec3ad-df19-4e19-a288-d6ca32779160" (UID: "ee0ec3ad-df19-4e19-a288-d6ca32779160"). InnerVolumeSpecName "kube-api-access-xtxrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.306784 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee0ec3ad-df19-4e19-a288-d6ca32779160" (UID: "ee0ec3ad-df19-4e19-a288-d6ca32779160"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.316184 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtxrn\" (UniqueName: \"kubernetes.io/projected/ee0ec3ad-df19-4e19-a288-d6ca32779160-kube-api-access-xtxrn\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.316215 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.316225 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.316234 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ec3ad-df19-4e19-a288-d6ca32779160-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.330661 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee" (OuterVolumeSpecName: "glance") pod "ee0ec3ad-df19-4e19-a288-d6ca32779160" (UID: "ee0ec3ad-df19-4e19-a288-d6ca32779160"). InnerVolumeSpecName "pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.418849 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") on node \"crc\" " Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.533622 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-74e2-account-create-update-krmpd"] Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.759242 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ee0ec3ad-df19-4e19-a288-d6ca32779160" (UID: "ee0ec3ad-df19-4e19-a288-d6ca32779160"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.767573 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-config-data" (OuterVolumeSpecName: "config-data") pod "ee0ec3ad-df19-4e19-a288-d6ca32779160" (UID: "ee0ec3ad-df19-4e19-a288-d6ca32779160"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.827931 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.828126 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee") on node "crc" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.843431 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.843482 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0ec3ad-df19-4e19-a288-d6ca32779160-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:10 crc kubenswrapper[4982]: I0224 15:15:10.843513 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.015683 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.043912 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.094240 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:15:11 crc kubenswrapper[4982]: E0224 15:15:11.094903 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerName="glance-httpd" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.094920 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerName="glance-httpd" Feb 24 15:15:11 crc kubenswrapper[4982]: E0224 15:15:11.094951 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerName="glance-log" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.094958 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerName="glance-log" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.095347 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerName="glance-log" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.095401 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0ec3ad-df19-4e19-a288-d6ca32779160" containerName="glance-httpd" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.097759 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.102263 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.107629 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.116542 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dzs7v" event={"ID":"65e38160-7267-4499-85f3-e05aeff196dc","Type":"ContainerStarted","Data":"c9ecb86772e5920785aaa0be13b277fc995c932d9d62f5bba2a170f73af4bf7e"} Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.122538 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.132064 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1073-account-create-update-75xhn" event={"ID":"0517f81d-f724-4803-9e98-85d228c39b2f","Type":"ContainerStarted","Data":"72b5515d02a0d818e1d4d55e1a209ce6f8723188bc0b42ea1c3e70105648c2e0"} Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.153561 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.153622 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-scripts\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.153681 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.153720 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62fed6a9-a36d-485f-bedc-cb54f1dad363-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.154413 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.154536 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-config-data\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.154842 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfc8d\" (UniqueName: \"kubernetes.io/projected/62fed6a9-a36d-485f-bedc-cb54f1dad363-kube-api-access-tfc8d\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.154917 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62fed6a9-a36d-485f-bedc-cb54f1dad363-logs\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.193157 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0ec3ad-df19-4e19-a288-d6ca32779160" path="/var/lib/kubelet/pods/ee0ec3ad-df19-4e19-a288-d6ca32779160/volumes" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.194199 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74e2-account-create-update-krmpd" event={"ID":"7bdcc0e1-05fe-45a5-929e-eba045439850","Type":"ContainerStarted","Data":"c008a6b92367f15e605b1a4159c552b3f73da4096091f5b8253d9e777b6b307d"} Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.194240 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-63de-account-create-update-9mg92" event={"ID":"df39a573-272f-4a7b-a6f8-751f8b2c0dd9","Type":"ContainerStarted","Data":"688ab5fa56b17793d2d349b32f81ba4e6a6f6c9db71753db6c6313e2122a01d3"} Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.194256 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-63de-account-create-update-9mg92" event={"ID":"df39a573-272f-4a7b-a6f8-751f8b2c0dd9","Type":"ContainerStarted","Data":"c53cfc4df81d55875ac2c6fd2598e7531c0b61a45d127e1782813b7e8bb2aab8"} Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.194266 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jchcv" event={"ID":"62f7297d-6126-40bc-b5b4-54d01f6cb255","Type":"ContainerStarted","Data":"415fbec6a8c2fe97df97eec423d061b4573498a064e91032b4bd3fb146ef00a7"} Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.206132 4982 generic.go:334] "Generic (PLEG): container finished" podID="30138906-e9e1-46d1-87e1-fd842efdf3ed" containerID="d2e0c4920d696de3b11aae5d7cef578ebe839dcddba26e0a3883f0b9cb36f857" exitCode=0 Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.206185 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-756sh" event={"ID":"30138906-e9e1-46d1-87e1-fd842efdf3ed","Type":"ContainerDied","Data":"d2e0c4920d696de3b11aae5d7cef578ebe839dcddba26e0a3883f0b9cb36f857"} Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.259036 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc 
kubenswrapper[4982]: I0224 15:15:11.259095 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-scripts\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.259157 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.259194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62fed6a9-a36d-485f-bedc-cb54f1dad363-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.259232 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.259254 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-config-data\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.259298 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfc8d\" (UniqueName: \"kubernetes.io/projected/62fed6a9-a36d-485f-bedc-cb54f1dad363-kube-api-access-tfc8d\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.259315 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62fed6a9-a36d-485f-bedc-cb54f1dad363-logs\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.259820 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62fed6a9-a36d-485f-bedc-cb54f1dad363-logs\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0" Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.268521 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
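The glance-default-internal-api-0 entries above show the kubelet volume pipeline in its normal order: reconciler_common.go:245 confirms each volume is attached (VerifyControllerAttachedVolume), reconciler_common.go:218 kicks off the mount, and operation_generator.go:637 later reports MountVolume.SetUp success. The csi_attacher.go:380 line is expected with the kubevirt.io.hostpath-provisioner driver: it does not advertise the STAGE_UNSTAGE_VOLUME capability, so kubelet skips the NodeStageVolume step and goes straight to the per-pod mount. A minimal Python sketch for measuring the started-to-succeeded gap per volume from a saved copy of this journal (the filename kubelet.log is hypothetical; keying on the UniqueName rather than the short volume name keeps pods that share names like "scripts" apart):

    import re
    from datetime import datetime

    ts_re  = re.compile(r'[IWE](\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})')
    vol_re = re.compile(r'\(UniqueName: \\?"([^"\\]+)\\?"')

    started = {}
    with open("kubelet.log") as f:               # hypothetical capture of this journal
        for line in f:
            ts, vol = ts_re.search(line), vol_re.search(line)
            if not (ts and vol):
                continue
            when = datetime.strptime(ts.group(1) + ts.group(2), "%m%d%H:%M:%S.%f")
            name = vol.group(1)                  # UniqueName embeds the pod UID
            if "MountVolume started" in line:
                started[name] = when
            elif "MountVolume.SetUp succeeded" in line and name in started:
                ms = (when - started.pop(name)).total_seconds() * 1000
                print(f"{ms:7.1f} ms  {name}")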
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.268843 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70a27d34e7ae7884b230e8e34a62af8eccdb60653e302a58734e40794a2eda43/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.271757 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/62fed6a9-a36d-485f-bedc-cb54f1dad363-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.273294 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-config-data\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.273336 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-63de-account-create-update-9mg92" podStartSLOduration=3.273314475 podStartE2EDuration="3.273314475s" podCreationTimestamp="2026-02-24 15:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:15:11.194960584 +0000 UTC m=+1572.814019077" watchObservedRunningTime="2026-02-24 15:15:11.273314475 +0000 UTC m=+1572.892372968"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.280564 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.283984 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-scripts\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.292329 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62fed6a9-a36d-485f-bedc-cb54f1dad363-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.297871 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfc8d\" (UniqueName: \"kubernetes.io/projected/62fed6a9-a36d-485f-bedc-cb54f1dad363-kube-api-access-tfc8d\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.313554 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-jchcv" podStartSLOduration=3.313529224 podStartE2EDuration="3.313529224s" podCreationTimestamp="2026-02-24 15:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:15:11.226987693 +0000 UTC m=+1572.846046196" watchObservedRunningTime="2026-02-24 15:15:11.313529224 +0000 UTC m=+1572.932587717"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.514133 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8ba581f2-b106-4ab3-a73b-5dba7fefb2ee\") pod \"glance-default-internal-api-0\" (UID: \"62fed6a9-a36d-485f-bedc-cb54f1dad363\") " pod="openstack/glance-default-internal-api-0"
Feb 24 15:15:11 crc kubenswrapper[4982]: I0224 15:15:11.766216 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.224145 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1073-account-create-update-75xhn" event={"ID":"0517f81d-f724-4803-9e98-85d228c39b2f","Type":"ContainerStarted","Data":"042b6dd31c04595c351e9f6fa6b9f93d255513ce63973ef5f651731d0336130b"}
Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.226333 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74e2-account-create-update-krmpd" event={"ID":"7bdcc0e1-05fe-45a5-929e-eba045439850","Type":"ContainerStarted","Data":"dc08518e95385eecca9fc0ea8d6fee5b91646461bbce5916d3534f64b9c2bc8f"}
Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.231618 4982 generic.go:334] "Generic (PLEG): container finished" podID="df39a573-272f-4a7b-a6f8-751f8b2c0dd9" containerID="688ab5fa56b17793d2d349b32f81ba4e6a6f6c9db71753db6c6313e2122a01d3" exitCode=0
Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.231706 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-63de-account-create-update-9mg92" event={"ID":"df39a573-272f-4a7b-a6f8-751f8b2c0dd9","Type":"ContainerDied","Data":"688ab5fa56b17793d2d349b32f81ba4e6a6f6c9db71753db6c6313e2122a01d3"}
Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.235273 4982 generic.go:334] "Generic (PLEG): container finished" podID="62f7297d-6126-40bc-b5b4-54d01f6cb255" containerID="415fbec6a8c2fe97df97eec423d061b4573498a064e91032b4bd3fb146ef00a7" exitCode=0
Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.235299 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jchcv" event={"ID":"62f7297d-6126-40bc-b5b4-54d01f6cb255","Type":"ContainerDied","Data":"415fbec6a8c2fe97df97eec423d061b4573498a064e91032b4bd3fb146ef00a7"}
Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.237711 4982 generic.go:334] "Generic (PLEG): container finished" podID="65e38160-7267-4499-85f3-e05aeff196dc" containerID="6c1b068874aae73d0ee7716d047087a5a7b64dcf8d79f6b590725466c7db9bfa" exitCode=0
Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.237834 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dzs7v" event={"ID":"65e38160-7267-4499-85f3-e05aeff196dc","Type":"ContainerDied","Data":"6c1b068874aae73d0ee7716d047087a5a7b64dcf8d79f6b590725466c7db9bfa"}
Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.256781 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1073-account-create-update-75xhn" podStartSLOduration=4.256736688 podStartE2EDuration="4.256736688s" podCreationTimestamp="2026-02-24 15:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:15:12.245874708 +0000 UTC m=+1573.864933211" watchObservedRunningTime="2026-02-24 15:15:12.256736688 +0000 UTC m=+1573.875795181"
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1073-account-create-update-75xhn" podStartSLOduration=4.256736688 podStartE2EDuration="4.256736688s" podCreationTimestamp="2026-02-24 15:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:15:12.245874708 +0000 UTC m=+1573.864933211" watchObservedRunningTime="2026-02-24 15:15:12.256736688 +0000 UTC m=+1573.875795181" Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.449852 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.863955 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.948614 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mft76\" (UniqueName: \"kubernetes.io/projected/30138906-e9e1-46d1-87e1-fd842efdf3ed-kube-api-access-mft76\") pod \"30138906-e9e1-46d1-87e1-fd842efdf3ed\" (UID: \"30138906-e9e1-46d1-87e1-fd842efdf3ed\") " Feb 24 15:15:12 crc kubenswrapper[4982]: I0224 15:15:12.975886 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30138906-e9e1-46d1-87e1-fd842efdf3ed-kube-api-access-mft76" (OuterVolumeSpecName: "kube-api-access-mft76") pod "30138906-e9e1-46d1-87e1-fd842efdf3ed" (UID: "30138906-e9e1-46d1-87e1-fd842efdf3ed"). InnerVolumeSpecName "kube-api-access-mft76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.077366 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30138906-e9e1-46d1-87e1-fd842efdf3ed-operator-scripts\") pod \"30138906-e9e1-46d1-87e1-fd842efdf3ed\" (UID: \"30138906-e9e1-46d1-87e1-fd842efdf3ed\") " Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.078826 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mft76\" (UniqueName: \"kubernetes.io/projected/30138906-e9e1-46d1-87e1-fd842efdf3ed-kube-api-access-mft76\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.079443 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30138906-e9e1-46d1-87e1-fd842efdf3ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30138906-e9e1-46d1-87e1-fd842efdf3ed" (UID: "30138906-e9e1-46d1-87e1-fd842efdf3ed"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.181299 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30138906-e9e1-46d1-87e1-fd842efdf3ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.273718 4982 generic.go:334] "Generic (PLEG): container finished" podID="0517f81d-f724-4803-9e98-85d228c39b2f" containerID="042b6dd31c04595c351e9f6fa6b9f93d255513ce63973ef5f651731d0336130b" exitCode=0 Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.274106 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1073-account-create-update-75xhn" event={"ID":"0517f81d-f724-4803-9e98-85d228c39b2f","Type":"ContainerDied","Data":"042b6dd31c04595c351e9f6fa6b9f93d255513ce63973ef5f651731d0336130b"} Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.289894 4982 generic.go:334] "Generic (PLEG): container finished" podID="7bdcc0e1-05fe-45a5-929e-eba045439850" containerID="dc08518e95385eecca9fc0ea8d6fee5b91646461bbce5916d3534f64b9c2bc8f" exitCode=0 Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.290015 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74e2-account-create-update-krmpd" event={"ID":"7bdcc0e1-05fe-45a5-929e-eba045439850","Type":"ContainerDied","Data":"dc08518e95385eecca9fc0ea8d6fee5b91646461bbce5916d3534f64b9c2bc8f"} Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.302876 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-756sh" event={"ID":"30138906-e9e1-46d1-87e1-fd842efdf3ed","Type":"ContainerDied","Data":"a11f69df1a759ede49bf3184231b8d0b57d4bbb0bdbdb1ed7ebd9cd3aecc1aa9"} Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.302929 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a11f69df1a759ede49bf3184231b8d0b57d4bbb0bdbdb1ed7ebd9cd3aecc1aa9" Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.303014 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-756sh" Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.308458 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"62fed6a9-a36d-485f-bedc-cb54f1dad363","Type":"ContainerStarted","Data":"308b047aa81c398bef7b80303ece12c0f9e868d54a50078e0a5ad59350a2ebc7"} Feb 24 15:15:13 crc kubenswrapper[4982]: I0224 15:15:13.981986 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.036976 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65e38160-7267-4499-85f3-e05aeff196dc-operator-scripts\") pod \"65e38160-7267-4499-85f3-e05aeff196dc\" (UID: \"65e38160-7267-4499-85f3-e05aeff196dc\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.037276 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f85k\" (UniqueName: \"kubernetes.io/projected/65e38160-7267-4499-85f3-e05aeff196dc-kube-api-access-2f85k\") pod \"65e38160-7267-4499-85f3-e05aeff196dc\" (UID: \"65e38160-7267-4499-85f3-e05aeff196dc\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.039898 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65e38160-7267-4499-85f3-e05aeff196dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65e38160-7267-4499-85f3-e05aeff196dc" (UID: "65e38160-7267-4499-85f3-e05aeff196dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.080805 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e38160-7267-4499-85f3-e05aeff196dc-kube-api-access-2f85k" (OuterVolumeSpecName: "kube-api-access-2f85k") pod "65e38160-7267-4499-85f3-e05aeff196dc" (UID: "65e38160-7267-4499-85f3-e05aeff196dc"). InnerVolumeSpecName "kube-api-access-2f85k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.141427 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65e38160-7267-4499-85f3-e05aeff196dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.141454 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f85k\" (UniqueName: \"kubernetes.io/projected/65e38160-7267-4499-85f3-e05aeff196dc-kube-api-access-2f85k\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.148337 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.159179 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.180326 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.254381 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf6gt\" (UniqueName: \"kubernetes.io/projected/62f7297d-6126-40bc-b5b4-54d01f6cb255-kube-api-access-kf6gt\") pod \"62f7297d-6126-40bc-b5b4-54d01f6cb255\" (UID: \"62f7297d-6126-40bc-b5b4-54d01f6cb255\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.254466 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k989g\" (UniqueName: \"kubernetes.io/projected/7bdcc0e1-05fe-45a5-929e-eba045439850-kube-api-access-k989g\") pod \"7bdcc0e1-05fe-45a5-929e-eba045439850\" (UID: \"7bdcc0e1-05fe-45a5-929e-eba045439850\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.254827 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99phn\" (UniqueName: \"kubernetes.io/projected/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-kube-api-access-99phn\") pod \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\" (UID: \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.255184 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdcc0e1-05fe-45a5-929e-eba045439850-operator-scripts\") pod \"7bdcc0e1-05fe-45a5-929e-eba045439850\" (UID: \"7bdcc0e1-05fe-45a5-929e-eba045439850\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.255225 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-operator-scripts\") pod \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\" (UID: \"df39a573-272f-4a7b-a6f8-751f8b2c0dd9\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.255280 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7297d-6126-40bc-b5b4-54d01f6cb255-operator-scripts\") pod \"62f7297d-6126-40bc-b5b4-54d01f6cb255\" (UID: \"62f7297d-6126-40bc-b5b4-54d01f6cb255\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.256076 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bdcc0e1-05fe-45a5-929e-eba045439850-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7bdcc0e1-05fe-45a5-929e-eba045439850" (UID: "7bdcc0e1-05fe-45a5-929e-eba045439850"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.256325 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bdcc0e1-05fe-45a5-929e-eba045439850-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.260410 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f7297d-6126-40bc-b5b4-54d01f6cb255-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62f7297d-6126-40bc-b5b4-54d01f6cb255" (UID: "62f7297d-6126-40bc-b5b4-54d01f6cb255"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.259044 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df39a573-272f-4a7b-a6f8-751f8b2c0dd9" (UID: "df39a573-272f-4a7b-a6f8-751f8b2c0dd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.262380 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-kube-api-access-99phn" (OuterVolumeSpecName: "kube-api-access-99phn") pod "df39a573-272f-4a7b-a6f8-751f8b2c0dd9" (UID: "df39a573-272f-4a7b-a6f8-751f8b2c0dd9"). InnerVolumeSpecName "kube-api-access-99phn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.271798 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bdcc0e1-05fe-45a5-929e-eba045439850-kube-api-access-k989g" (OuterVolumeSpecName: "kube-api-access-k989g") pod "7bdcc0e1-05fe-45a5-929e-eba045439850" (UID: "7bdcc0e1-05fe-45a5-929e-eba045439850"). InnerVolumeSpecName "kube-api-access-k989g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.290228 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f7297d-6126-40bc-b5b4-54d01f6cb255-kube-api-access-kf6gt" (OuterVolumeSpecName: "kube-api-access-kf6gt") pod "62f7297d-6126-40bc-b5b4-54d01f6cb255" (UID: "62f7297d-6126-40bc-b5b4-54d01f6cb255"). InnerVolumeSpecName "kube-api-access-kf6gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.346428 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"62fed6a9-a36d-485f-bedc-cb54f1dad363","Type":"ContainerStarted","Data":"c2117d2a37e651526df9bbe1eccbb7c036a83980151bca31852e40c7aeeb1ed9"} Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.359181 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.359219 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62f7297d-6126-40bc-b5b4-54d01f6cb255-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.359234 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf6gt\" (UniqueName: \"kubernetes.io/projected/62f7297d-6126-40bc-b5b4-54d01f6cb255-kube-api-access-kf6gt\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.359246 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k989g\" (UniqueName: \"kubernetes.io/projected/7bdcc0e1-05fe-45a5-929e-eba045439850-kube-api-access-k989g\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.359260 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99phn\" (UniqueName: \"kubernetes.io/projected/df39a573-272f-4a7b-a6f8-751f8b2c0dd9-kube-api-access-99phn\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.360825 4982 generic.go:334] "Generic (PLEG): container finished" podID="c1571922-8d74-4fb5-bf86-1093938b554d" containerID="dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe" exitCode=0 Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.360888 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerDied","Data":"dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe"} Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.365564 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74e2-account-create-update-krmpd" event={"ID":"7bdcc0e1-05fe-45a5-929e-eba045439850","Type":"ContainerDied","Data":"c008a6b92367f15e605b1a4159c552b3f73da4096091f5b8253d9e777b6b307d"} Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.365602 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c008a6b92367f15e605b1a4159c552b3f73da4096091f5b8253d9e777b6b307d" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.365665 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-74e2-account-create-update-krmpd" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.385031 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-63de-account-create-update-9mg92" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.385100 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-63de-account-create-update-9mg92" event={"ID":"df39a573-272f-4a7b-a6f8-751f8b2c0dd9","Type":"ContainerDied","Data":"c53cfc4df81d55875ac2c6fd2598e7531c0b61a45d127e1782813b7e8bb2aab8"} Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.385172 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53cfc4df81d55875ac2c6fd2598e7531c0b61a45d127e1782813b7e8bb2aab8" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.391447 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jchcv" event={"ID":"62f7297d-6126-40bc-b5b4-54d01f6cb255","Type":"ContainerDied","Data":"ba0a125b2578dcf380c1db74d4961e4b4066ac035b5e84924fe75742a8bb9c6d"} Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.391707 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba0a125b2578dcf380c1db74d4961e4b4066ac035b5e84924fe75742a8bb9c6d" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.391850 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jchcv" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.399691 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dzs7v" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.406158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dzs7v" event={"ID":"65e38160-7267-4499-85f3-e05aeff196dc","Type":"ContainerDied","Data":"c9ecb86772e5920785aaa0be13b277fc995c932d9d62f5bba2a170f73af4bf7e"} Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.406253 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9ecb86772e5920785aaa0be13b277fc995c932d9d62f5bba2a170f73af4bf7e" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.552971 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.569247 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.569306 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.621109 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.644456 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.668537 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-scripts\") pod \"c1571922-8d74-4fb5-bf86-1093938b554d\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.668609 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-config-data\") pod \"c1571922-8d74-4fb5-bf86-1093938b554d\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.668697 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-log-httpd\") pod \"c1571922-8d74-4fb5-bf86-1093938b554d\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.668754 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-sg-core-conf-yaml\") pod \"c1571922-8d74-4fb5-bf86-1093938b554d\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.668831 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-combined-ca-bundle\") pod \"c1571922-8d74-4fb5-bf86-1093938b554d\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.668877 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-run-httpd\") pod \"c1571922-8d74-4fb5-bf86-1093938b554d\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.668911 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsc57\" (UniqueName: \"kubernetes.io/projected/c1571922-8d74-4fb5-bf86-1093938b554d-kube-api-access-dsc57\") pod \"c1571922-8d74-4fb5-bf86-1093938b554d\" (UID: \"c1571922-8d74-4fb5-bf86-1093938b554d\") " Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.672851 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1571922-8d74-4fb5-bf86-1093938b554d" (UID: 
"c1571922-8d74-4fb5-bf86-1093938b554d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.673006 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1571922-8d74-4fb5-bf86-1093938b554d" (UID: "c1571922-8d74-4fb5-bf86-1093938b554d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.678452 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-scripts" (OuterVolumeSpecName: "scripts") pod "c1571922-8d74-4fb5-bf86-1093938b554d" (UID: "c1571922-8d74-4fb5-bf86-1093938b554d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.678628 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1571922-8d74-4fb5-bf86-1093938b554d-kube-api-access-dsc57" (OuterVolumeSpecName: "kube-api-access-dsc57") pod "c1571922-8d74-4fb5-bf86-1093938b554d" (UID: "c1571922-8d74-4fb5-bf86-1093938b554d"). InnerVolumeSpecName "kube-api-access-dsc57". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.726646 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1571922-8d74-4fb5-bf86-1093938b554d" (UID: "c1571922-8d74-4fb5-bf86-1093938b554d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.773195 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.773232 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.773242 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1571922-8d74-4fb5-bf86-1093938b554d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.773251 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsc57\" (UniqueName: \"kubernetes.io/projected/c1571922-8d74-4fb5-bf86-1093938b554d-kube-api-access-dsc57\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.773260 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.839901 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1571922-8d74-4fb5-bf86-1093938b554d" (UID: "c1571922-8d74-4fb5-bf86-1093938b554d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.879478 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.981607 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-config-data" (OuterVolumeSpecName: "config-data") pod "c1571922-8d74-4fb5-bf86-1093938b554d" (UID: "c1571922-8d74-4fb5-bf86-1093938b554d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.984253 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1571922-8d74-4fb5-bf86-1093938b554d-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:14 crc kubenswrapper[4982]: I0224 15:15:14.991347 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.087691 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgmgn\" (UniqueName: \"kubernetes.io/projected/0517f81d-f724-4803-9e98-85d228c39b2f-kube-api-access-cgmgn\") pod \"0517f81d-f724-4803-9e98-85d228c39b2f\" (UID: \"0517f81d-f724-4803-9e98-85d228c39b2f\") " Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.087791 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0517f81d-f724-4803-9e98-85d228c39b2f-operator-scripts\") pod \"0517f81d-f724-4803-9e98-85d228c39b2f\" (UID: \"0517f81d-f724-4803-9e98-85d228c39b2f\") " Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.089280 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0517f81d-f724-4803-9e98-85d228c39b2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0517f81d-f724-4803-9e98-85d228c39b2f" (UID: "0517f81d-f724-4803-9e98-85d228c39b2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.095373 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0517f81d-f724-4803-9e98-85d228c39b2f-kube-api-access-cgmgn" (OuterVolumeSpecName: "kube-api-access-cgmgn") pod "0517f81d-f724-4803-9e98-85d228c39b2f" (UID: "0517f81d-f724-4803-9e98-85d228c39b2f"). InnerVolumeSpecName "kube-api-access-cgmgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.189694 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgmgn\" (UniqueName: \"kubernetes.io/projected/0517f81d-f724-4803-9e98-85d228c39b2f-kube-api-access-cgmgn\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.189941 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0517f81d-f724-4803-9e98-85d228c39b2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.412326 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"62fed6a9-a36d-485f-bedc-cb54f1dad363","Type":"ContainerStarted","Data":"d027597214b242c87c3b52a0c1bce5f00d1b57b4e4f531ea68ccea4f17275ed0"} Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.418624 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1571922-8d74-4fb5-bf86-1093938b554d","Type":"ContainerDied","Data":"497af5b1647d09a0c07895a2e17d59aa84c48312ea6a759bfd2887c0d48c388c"} Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.418647 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.418686 4982 scope.go:117] "RemoveContainer" containerID="d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.426160 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1073-account-create-update-75xhn" event={"ID":"0517f81d-f724-4803-9e98-85d228c39b2f","Type":"ContainerDied","Data":"72b5515d02a0d818e1d4d55e1a209ce6f8723188bc0b42ea1c3e70105648c2e0"} Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.426240 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b5515d02a0d818e1d4d55e1a209ce6f8723188bc0b42ea1c3e70105648c2e0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.426267 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.426283 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.426366 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1073-account-create-update-75xhn" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.441158 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.441134217 podStartE2EDuration="5.441134217s" podCreationTimestamp="2026-02-24 15:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:15:15.436835342 +0000 UTC m=+1577.055893835" watchObservedRunningTime="2026-02-24 15:15:15.441134217 +0000 UTC m=+1577.060192710" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.474738 4982 scope.go:117] "RemoveContainer" containerID="2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.492135 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.512270 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.523049 4982 scope.go:117] "RemoveContainer" containerID="2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.532395 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533002 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="sg-core" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533025 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="sg-core" Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533038 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="proxy-httpd" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533046 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="proxy-httpd" Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533058 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdcc0e1-05fe-45a5-929e-eba045439850" containerName="mariadb-account-create-update" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533067 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdcc0e1-05fe-45a5-929e-eba045439850" containerName="mariadb-account-create-update" Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533098 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0517f81d-f724-4803-9e98-85d228c39b2f" containerName="mariadb-account-create-update" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533105 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0517f81d-f724-4803-9e98-85d228c39b2f" containerName="mariadb-account-create-update" Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533122 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e38160-7267-4499-85f3-e05aeff196dc" containerName="mariadb-database-create" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533129 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e38160-7267-4499-85f3-e05aeff196dc" containerName="mariadb-database-create" Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533145 
4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="ceilometer-central-agent" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533153 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="ceilometer-central-agent" Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533165 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="ceilometer-notification-agent" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533173 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="ceilometer-notification-agent" Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533186 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df39a573-272f-4a7b-a6f8-751f8b2c0dd9" containerName="mariadb-account-create-update" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533193 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="df39a573-272f-4a7b-a6f8-751f8b2c0dd9" containerName="mariadb-account-create-update" Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533203 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f7297d-6126-40bc-b5b4-54d01f6cb255" containerName="mariadb-database-create" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533210 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f7297d-6126-40bc-b5b4-54d01f6cb255" containerName="mariadb-database-create" Feb 24 15:15:15 crc kubenswrapper[4982]: E0224 15:15:15.533235 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30138906-e9e1-46d1-87e1-fd842efdf3ed" containerName="mariadb-database-create" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533242 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="30138906-e9e1-46d1-87e1-fd842efdf3ed" containerName="mariadb-database-create" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533510 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="ceilometer-central-agent" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533533 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e38160-7267-4499-85f3-e05aeff196dc" containerName="mariadb-database-create" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533546 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="ceilometer-notification-agent" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533556 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bdcc0e1-05fe-45a5-929e-eba045439850" containerName="mariadb-account-create-update" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533572 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="df39a573-272f-4a7b-a6f8-751f8b2c0dd9" containerName="mariadb-account-create-update" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533592 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f7297d-6126-40bc-b5b4-54d01f6cb255" containerName="mariadb-database-create" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533606 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="proxy-httpd" Feb 24 15:15:15 crc kubenswrapper[4982]: 
I0224 15:15:15.533624 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" containerName="sg-core" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533635 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0517f81d-f724-4803-9e98-85d228c39b2f" containerName="mariadb-account-create-update" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.533648 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="30138906-e9e1-46d1-87e1-fd842efdf3ed" containerName="mariadb-database-create" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.544168 4982 scope.go:117] "RemoveContainer" containerID="dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.547009 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.551282 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.551359 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.553032 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.713281 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-run-httpd\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.713340 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-log-httpd\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.713390 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.713878 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.713980 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-scripts\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.714018 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68vwt\" (UniqueName: \"kubernetes.io/projected/9b450650-8010-4373-9fa1-0c68c3fe9f95-kube-api-access-68vwt\") pod 
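The E/I pairs from cpu_manager.go:410, state_mem.go:107 and memory_manager.go:354 in the block above are admission-time housekeeping rather than failures: when the replacement ceilometer-0 is admitted (same name, new UID 9b450650-8010-4373-9fa1-0c68c3fe9f95), the CPU and memory managers discard per-container state belonging to pod UIDs that no longer exist, i.e. the old ceilometer-0 (c1571922-8d74-4fb5-bf86-1093938b554d) and the completed mariadb create jobs. A tally per UID from the same hypothetical kubelet.log:

    import re
    from collections import Counter

    pat = re.compile(r'RemoveStaleState[:\s][^"]*" podUID="([^"]+)" containerName="([^"]+)"')
    counts = Counter()
    with open("kubelet.log") as f:               # hypothetical filename
        for line in f:
            m = pat.search(line)
            if m:
                counts[m.group(1)] += 1          # one entry per stale container

    for uid, n in counts.most_common():
        print(f"{n:2d} stale container entries  {uid}")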
\"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.714059 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-config-data\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.815840 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.816008 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.816054 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-scripts\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.816092 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68vwt\" (UniqueName: \"kubernetes.io/projected/9b450650-8010-4373-9fa1-0c68c3fe9f95-kube-api-access-68vwt\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.816135 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-config-data\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.816226 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-run-httpd\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.816260 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-log-httpd\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.816789 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-log-httpd\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.817044 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-run-httpd\") pod \"ceilometer-0\" (UID: 
\"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.820886 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.821250 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-scripts\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.821867 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.833170 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-config-data\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.835547 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68vwt\" (UniqueName: \"kubernetes.io/projected/9b450650-8010-4373-9fa1-0c68c3fe9f95-kube-api-access-68vwt\") pod \"ceilometer-0\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " pod="openstack/ceilometer-0" Feb 24 15:15:15 crc kubenswrapper[4982]: I0224 15:15:15.881901 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:16 crc kubenswrapper[4982]: I0224 15:15:16.438951 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:17 crc kubenswrapper[4982]: I0224 15:15:17.170173 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1571922-8d74-4fb5-bf86-1093938b554d" path="/var/lib/kubelet/pods/c1571922-8d74-4fb5-bf86-1093938b554d/volumes" Feb 24 15:15:17 crc kubenswrapper[4982]: I0224 15:15:17.454516 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 15:15:17 crc kubenswrapper[4982]: I0224 15:15:17.454541 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 15:15:17 crc kubenswrapper[4982]: I0224 15:15:17.455559 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerStarted","Data":"8bb10ed17ab35b8a2976f6a76cc830e03cc7d342b549153ccf45cae4baf58beb"} Feb 24 15:15:17 crc kubenswrapper[4982]: I0224 15:15:17.455624 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerStarted","Data":"1d2bd6a127384ea5817f02ef03b93b37c9c23e7c8170aa44e6ba9eeb9c13e6f3"} Feb 24 15:15:18 crc kubenswrapper[4982]: I0224 15:15:18.465099 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerStarted","Data":"1afebfc18e7505b60f6d36ccedf1a491647cff537fc115fc9776a58662031123"} Feb 24 15:15:18 crc kubenswrapper[4982]: I0224 15:15:18.894322 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gp5wp"] Feb 24 15:15:18 crc kubenswrapper[4982]: I0224 15:15:18.895871 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:18 crc kubenswrapper[4982]: I0224 15:15:18.899732 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8zmkr" Feb 24 15:15:18 crc kubenswrapper[4982]: I0224 15:15:18.899899 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 24 15:15:18 crc kubenswrapper[4982]: I0224 15:15:18.900011 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 24 15:15:18 crc kubenswrapper[4982]: I0224 15:15:18.916089 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gp5wp"] Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.019841 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-scripts\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.020326 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-config-data\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.020427 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6cj\" (UniqueName: \"kubernetes.io/projected/a27d742e-50bb-470c-a3af-8bd17beaca37-kube-api-access-th6cj\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.020553 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.122518 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-scripts\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.122591 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-config-data\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.122634 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th6cj\" (UniqueName: \"kubernetes.io/projected/a27d742e-50bb-470c-a3af-8bd17beaca37-kube-api-access-th6cj\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: 
\"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.122669 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.127194 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.128334 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-config-data\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.130682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-scripts\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.141368 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th6cj\" (UniqueName: \"kubernetes.io/projected/a27d742e-50bb-470c-a3af-8bd17beaca37-kube-api-access-th6cj\") pod \"nova-cell0-conductor-db-sync-gp5wp\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.312389 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.548611 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerStarted","Data":"4e109b456b94f33df3a346ea88b06c9bc0a897b3023c2b972935448fca312c78"} Feb 24 15:15:19 crc kubenswrapper[4982]: I0224 15:15:19.930364 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gp5wp"] Feb 24 15:15:20 crc kubenswrapper[4982]: I0224 15:15:20.577134 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gp5wp" event={"ID":"a27d742e-50bb-470c-a3af-8bd17beaca37","Type":"ContainerStarted","Data":"efdceddd97b8763bf2ec549048e6221468d33d1c4916f39f0e786d22134406fe"} Feb 24 15:15:20 crc kubenswrapper[4982]: I0224 15:15:20.608937 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 24 15:15:20 crc kubenswrapper[4982]: I0224 15:15:20.609048 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 15:15:20 crc kubenswrapper[4982]: I0224 15:15:20.689549 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 24 15:15:21 crc kubenswrapper[4982]: I0224 15:15:21.591118 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerStarted","Data":"03a639862a4f30cdb36237d521a9f2daf16da272a240ea1226d7564a90650eda"} Feb 24 15:15:21 crc kubenswrapper[4982]: I0224 15:15:21.628461 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.265532683 podStartE2EDuration="6.628439107s" podCreationTimestamp="2026-02-24 15:15:15 +0000 UTC" firstStartedPulling="2026-02-24 15:15:16.438607397 +0000 UTC m=+1578.057665890" lastFinishedPulling="2026-02-24 15:15:20.801513821 +0000 UTC m=+1582.420572314" observedRunningTime="2026-02-24 15:15:21.61923449 +0000 UTC m=+1583.238292983" watchObservedRunningTime="2026-02-24 15:15:21.628439107 +0000 UTC m=+1583.247497600" Feb 24 15:15:21 crc kubenswrapper[4982]: I0224 15:15:21.766639 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:21 crc kubenswrapper[4982]: I0224 15:15:21.768126 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:21 crc kubenswrapper[4982]: I0224 15:15:21.809785 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:21 crc kubenswrapper[4982]: I0224 15:15:21.823009 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.284733 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fjv78"] Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.287349 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.299269 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fjv78"] Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.425998 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-utilities\") pod \"community-operators-fjv78\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.426080 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-catalog-content\") pod \"community-operators-fjv78\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.426338 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hds9d\" (UniqueName: \"kubernetes.io/projected/29128286-55dc-4660-915e-21aed6bacdbe-kube-api-access-hds9d\") pod \"community-operators-fjv78\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.529213 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hds9d\" (UniqueName: \"kubernetes.io/projected/29128286-55dc-4660-915e-21aed6bacdbe-kube-api-access-hds9d\") pod \"community-operators-fjv78\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.529426 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-utilities\") pod \"community-operators-fjv78\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.529469 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-catalog-content\") pod \"community-operators-fjv78\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.529953 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-utilities\") pod \"community-operators-fjv78\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.529991 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-catalog-content\") pod \"community-operators-fjv78\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.554397 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hds9d\" (UniqueName: \"kubernetes.io/projected/29128286-55dc-4660-915e-21aed6bacdbe-kube-api-access-hds9d\") pod \"community-operators-fjv78\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.607623 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.607682 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.607695 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:22 crc kubenswrapper[4982]: I0224 15:15:22.622433 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:23 crc kubenswrapper[4982]: I0224 15:15:23.267717 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fjv78"] Feb 24 15:15:23 crc kubenswrapper[4982]: I0224 15:15:23.620112 4982 generic.go:334] "Generic (PLEG): container finished" podID="29128286-55dc-4660-915e-21aed6bacdbe" containerID="4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793" exitCode=0 Feb 24 15:15:23 crc kubenswrapper[4982]: I0224 15:15:23.620353 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjv78" event={"ID":"29128286-55dc-4660-915e-21aed6bacdbe","Type":"ContainerDied","Data":"4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793"} Feb 24 15:15:23 crc kubenswrapper[4982]: I0224 15:15:23.620641 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjv78" event={"ID":"29128286-55dc-4660-915e-21aed6bacdbe","Type":"ContainerStarted","Data":"3a1403cbad658c763f91e96e04ad78a71808f25955bb5750a26f704b31638fef"} Feb 24 15:15:25 crc kubenswrapper[4982]: I0224 15:15:25.042667 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:25 crc kubenswrapper[4982]: I0224 15:15:25.043047 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 15:15:25 crc kubenswrapper[4982]: I0224 15:15:25.068640 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 24 15:15:31 crc kubenswrapper[4982]: I0224 15:15:31.742433 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gp5wp" event={"ID":"a27d742e-50bb-470c-a3af-8bd17beaca37","Type":"ContainerStarted","Data":"be0111c31d3bd8f073bf5f4d422c284b5e81874e1ff64fe8bf13bdb697ecc77a"} Feb 24 15:15:31 crc kubenswrapper[4982]: I0224 15:15:31.747013 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjv78" event={"ID":"29128286-55dc-4660-915e-21aed6bacdbe","Type":"ContainerStarted","Data":"53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4"} Feb 24 15:15:31 crc kubenswrapper[4982]: I0224 15:15:31.775462 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gp5wp" podStartSLOduration=2.835832661 podStartE2EDuration="13.775443998s" podCreationTimestamp="2026-02-24 15:15:18 +0000 UTC" 
firstStartedPulling="2026-02-24 15:15:19.951917876 +0000 UTC m=+1581.570976369" lastFinishedPulling="2026-02-24 15:15:30.891529193 +0000 UTC m=+1592.510587706" observedRunningTime="2026-02-24 15:15:31.760117207 +0000 UTC m=+1593.379175720" watchObservedRunningTime="2026-02-24 15:15:31.775443998 +0000 UTC m=+1593.394502511" Feb 24 15:15:36 crc kubenswrapper[4982]: I0224 15:15:36.819467 4982 generic.go:334] "Generic (PLEG): container finished" podID="29128286-55dc-4660-915e-21aed6bacdbe" containerID="53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4" exitCode=0 Feb 24 15:15:36 crc kubenswrapper[4982]: I0224 15:15:36.819565 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjv78" event={"ID":"29128286-55dc-4660-915e-21aed6bacdbe","Type":"ContainerDied","Data":"53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4"} Feb 24 15:15:37 crc kubenswrapper[4982]: I0224 15:15:37.834131 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjv78" event={"ID":"29128286-55dc-4660-915e-21aed6bacdbe","Type":"ContainerStarted","Data":"73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648"} Feb 24 15:15:37 crc kubenswrapper[4982]: I0224 15:15:37.856591 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fjv78" podStartSLOduration=2.100114092 podStartE2EDuration="15.85657507s" podCreationTimestamp="2026-02-24 15:15:22 +0000 UTC" firstStartedPulling="2026-02-24 15:15:23.622820332 +0000 UTC m=+1585.241878825" lastFinishedPulling="2026-02-24 15:15:37.37928131 +0000 UTC m=+1598.998339803" observedRunningTime="2026-02-24 15:15:37.854143265 +0000 UTC m=+1599.473201758" watchObservedRunningTime="2026-02-24 15:15:37.85657507 +0000 UTC m=+1599.475633563" Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.204011 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.207476 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="ceilometer-central-agent" containerID="cri-o://8bb10ed17ab35b8a2976f6a76cc830e03cc7d342b549153ccf45cae4baf58beb" gracePeriod=30 Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.207543 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="sg-core" containerID="cri-o://4e109b456b94f33df3a346ea88b06c9bc0a897b3023c2b972935448fca312c78" gracePeriod=30 Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.207600 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="proxy-httpd" containerID="cri-o://03a639862a4f30cdb36237d521a9f2daf16da272a240ea1226d7564a90650eda" gracePeriod=30 Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.207687 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="ceilometer-notification-agent" containerID="cri-o://1afebfc18e7505b60f6d36ccedf1a491647cff537fc115fc9776a58662031123" gracePeriod=30 Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.222119 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" 
podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.237:3000/\": EOF" Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.864422 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerID="03a639862a4f30cdb36237d521a9f2daf16da272a240ea1226d7564a90650eda" exitCode=0 Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.864949 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerID="4e109b456b94f33df3a346ea88b06c9bc0a897b3023c2b972935448fca312c78" exitCode=2 Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.864961 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerID="8bb10ed17ab35b8a2976f6a76cc830e03cc7d342b549153ccf45cae4baf58beb" exitCode=0 Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.864507 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerDied","Data":"03a639862a4f30cdb36237d521a9f2daf16da272a240ea1226d7564a90650eda"} Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.865072 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerDied","Data":"4e109b456b94f33df3a346ea88b06c9bc0a897b3023c2b972935448fca312c78"} Feb 24 15:15:39 crc kubenswrapper[4982]: I0224 15:15:39.865121 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerDied","Data":"8bb10ed17ab35b8a2976f6a76cc830e03cc7d342b549153ccf45cae4baf58beb"} Feb 24 15:15:41 crc kubenswrapper[4982]: I0224 15:15:41.921370 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerID="1afebfc18e7505b60f6d36ccedf1a491647cff537fc115fc9776a58662031123" exitCode=0 Feb 24 15:15:41 crc kubenswrapper[4982]: I0224 15:15:41.921928 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerDied","Data":"1afebfc18e7505b60f6d36ccedf1a491647cff537fc115fc9776a58662031123"} Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.090584 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.169319 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-log-httpd\") pod \"9b450650-8010-4373-9fa1-0c68c3fe9f95\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.169408 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-scripts\") pod \"9b450650-8010-4373-9fa1-0c68c3fe9f95\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.169708 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-config-data\") pod \"9b450650-8010-4373-9fa1-0c68c3fe9f95\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.169745 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-combined-ca-bundle\") pod \"9b450650-8010-4373-9fa1-0c68c3fe9f95\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.169792 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68vwt\" (UniqueName: \"kubernetes.io/projected/9b450650-8010-4373-9fa1-0c68c3fe9f95-kube-api-access-68vwt\") pod \"9b450650-8010-4373-9fa1-0c68c3fe9f95\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.169817 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-sg-core-conf-yaml\") pod \"9b450650-8010-4373-9fa1-0c68c3fe9f95\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.169868 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-run-httpd\") pod \"9b450650-8010-4373-9fa1-0c68c3fe9f95\" (UID: \"9b450650-8010-4373-9fa1-0c68c3fe9f95\") " Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.170071 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9b450650-8010-4373-9fa1-0c68c3fe9f95" (UID: "9b450650-8010-4373-9fa1-0c68c3fe9f95"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.170422 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.171795 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9b450650-8010-4373-9fa1-0c68c3fe9f95" (UID: "9b450650-8010-4373-9fa1-0c68c3fe9f95"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.177141 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b450650-8010-4373-9fa1-0c68c3fe9f95-kube-api-access-68vwt" (OuterVolumeSpecName: "kube-api-access-68vwt") pod "9b450650-8010-4373-9fa1-0c68c3fe9f95" (UID: "9b450650-8010-4373-9fa1-0c68c3fe9f95"). InnerVolumeSpecName "kube-api-access-68vwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.180781 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-scripts" (OuterVolumeSpecName: "scripts") pod "9b450650-8010-4373-9fa1-0c68c3fe9f95" (UID: "9b450650-8010-4373-9fa1-0c68c3fe9f95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.236721 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9b450650-8010-4373-9fa1-0c68c3fe9f95" (UID: "9b450650-8010-4373-9fa1-0c68c3fe9f95"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.276488 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.277035 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68vwt\" (UniqueName: \"kubernetes.io/projected/9b450650-8010-4373-9fa1-0c68c3fe9f95-kube-api-access-68vwt\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.277053 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.277063 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b450650-8010-4373-9fa1-0c68c3fe9f95-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.294278 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b450650-8010-4373-9fa1-0c68c3fe9f95" (UID: "9b450650-8010-4373-9fa1-0c68c3fe9f95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.313528 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-config-data" (OuterVolumeSpecName: "config-data") pod "9b450650-8010-4373-9fa1-0c68c3fe9f95" (UID: "9b450650-8010-4373-9fa1-0c68c3fe9f95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.379579 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.379636 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b450650-8010-4373-9fa1-0c68c3fe9f95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.622942 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.623025 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.944091 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.944401 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b450650-8010-4373-9fa1-0c68c3fe9f95","Type":"ContainerDied","Data":"1d2bd6a127384ea5817f02ef03b93b37c9c23e7c8170aa44e6ba9eeb9c13e6f3"} Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.944440 4982 scope.go:117] "RemoveContainer" containerID="03a639862a4f30cdb36237d521a9f2daf16da272a240ea1226d7564a90650eda" Feb 24 15:15:42 crc kubenswrapper[4982]: I0224 15:15:42.984132 4982 scope.go:117] "RemoveContainer" containerID="4e109b456b94f33df3a346ea88b06c9bc0a897b3023c2b972935448fca312c78" Feb 24 15:15:42 crc kubenswrapper[4982]: E0224 15:15:42.991006 4982 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9de3acd7dfbc55dc728eddccc536d23a3f9b647b21829223bf3ff930bb7f5787/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9de3acd7dfbc55dc728eddccc536d23a3f9b647b21829223bf3ff930bb7f5787/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_heat-engine-7964bbc76-m2h7r_a92127b8-25b0-4d2a-874d-91cf1acfdc79/heat-engine/0.log" to get inode usage: stat /var/log/pods/openstack_heat-engine-7964bbc76-m2h7r_a92127b8-25b0-4d2a-874d-91cf1acfdc79/heat-engine/0.log: no such file or directory Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.000247 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.018123 4982 scope.go:117] "RemoveContainer" containerID="1afebfc18e7505b60f6d36ccedf1a491647cff537fc115fc9776a58662031123" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.019157 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.037129 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:43 crc kubenswrapper[4982]: E0224 15:15:43.037731 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="proxy-httpd" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.037754 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="proxy-httpd" Feb 24 15:15:43 crc 
kubenswrapper[4982]: E0224 15:15:43.037771 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="ceilometer-central-agent" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.037780 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="ceilometer-central-agent" Feb 24 15:15:43 crc kubenswrapper[4982]: E0224 15:15:43.037824 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="sg-core" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.037834 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="sg-core" Feb 24 15:15:43 crc kubenswrapper[4982]: E0224 15:15:43.037881 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="ceilometer-notification-agent" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.037890 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="ceilometer-notification-agent" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.038180 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="sg-core" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.038208 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="ceilometer-central-agent" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.038227 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="proxy-httpd" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.038244 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" containerName="ceilometer-notification-agent" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.041243 4982 util.go:30] "No sandbox for pod can be found. 
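Because the replacement ceilometer-0 arrives under a new UID (d36c7a7a-058f-4f57-be56-3130a34c5c67, below) while checkpointed state for the old UID (9b450650-...) still exists, the CPU and memory managers emit RemoveStaleState and "Deleted CPUSet assignment" for each old container before admitting the new pod. The kubelet persists this state in /var/lib/kubelet/cpu_manager_state; here is a sketch that reads it. The field names follow the kubelet's JSON checkpoint format but should be treated as an assumption, and with the default "none" CPU-manager policy the entries map stays empty.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// cpuManagerState mirrors the kubelet's CPU-manager checkpoint file
// (schema assumed). Entries maps pod UID -> container name -> assigned
// CPU set, and is only populated under the static CPU-manager policy.
type cpuManagerState struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"`
	Checksum      uint64                       `json:"checksum"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	var st cpuManagerState
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("policy=%s defaultCPUs=%q pods=%d\n",
		st.PolicyName, st.DefaultCPUSet, len(st.Entries))
}
```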
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.051078 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.055976 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.074473 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.081214 4982 scope.go:117] "RemoveContainer" containerID="8bb10ed17ab35b8a2976f6a76cc830e03cc7d342b549153ccf45cae4baf58beb" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.099068 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.099332 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-config-data\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.099405 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-log-httpd\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.099427 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-scripts\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.099449 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-run-httpd\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.099637 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7578f\" (UniqueName: \"kubernetes.io/projected/d36c7a7a-058f-4f57-be56-3130a34c5c67-kube-api-access-7578f\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.099841 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.162229 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b450650-8010-4373-9fa1-0c68c3fe9f95" 
path="/var/lib/kubelet/pods/9b450650-8010-4373-9fa1-0c68c3fe9f95/volumes" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.202007 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-config-data\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.202074 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-log-httpd\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.202094 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-scripts\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.202112 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-run-httpd\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.202193 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7578f\" (UniqueName: \"kubernetes.io/projected/d36c7a7a-058f-4f57-be56-3130a34c5c67-kube-api-access-7578f\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.202353 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.202419 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.203330 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-log-httpd\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.204748 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-run-httpd\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.216726 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " 
pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.216835 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-scripts\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.217041 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.217134 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-config-data\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.227992 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7578f\" (UniqueName: \"kubernetes.io/projected/d36c7a7a-058f-4f57-be56-3130a34c5c67-kube-api-access-7578f\") pod \"ceilometer-0\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.375719 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.690653 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fjv78" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="registry-server" probeResult="failure" output=< Feb 24 15:15:43 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:15:43 crc kubenswrapper[4982]: > Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.896139 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:43 crc kubenswrapper[4982]: I0224 15:15:43.967183 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerStarted","Data":"22bf10557ac737c18049b100b86f2b63e988d91e581c8cde3167c02d15c42335"} Feb 24 15:15:44 crc kubenswrapper[4982]: I0224 15:15:44.981685 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerStarted","Data":"f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896"} Feb 24 15:15:45 crc kubenswrapper[4982]: E0224 15:15:45.293845 4982 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c1ac6a93d1808cdaa289787e087945cfa7fcf27100a385ac34d13d092cf24fbf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c1ac6a93d1808cdaa289787e087945cfa7fcf27100a385ac34d13d092cf24fbf/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-default-internal-api-0_ee0ec3ad-df19-4e19-a288-d6ca32779160/glance-log/0.log" to get inode usage: stat /var/log/pods/openstack_glance-default-internal-api-0_ee0ec3ad-df19-4e19-a288-d6ca32779160/glance-log/0.log: no such file or directory Feb 24 15:15:45 crc kubenswrapper[4982]: I0224 15:15:45.994268 4982 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerStarted","Data":"e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b"} Feb 24 15:15:47 crc kubenswrapper[4982]: I0224 15:15:47.009312 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerStarted","Data":"d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0"} Feb 24 15:15:47 crc kubenswrapper[4982]: E0224 15:15:47.349682 4982 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/805be59bca4aec6f6a60f7f3c774b5151947c36b6f2d7fdd4940bc6263558690/diff" to get inode usage: stat /var/lib/containers/storage/overlay/805be59bca4aec6f6a60f7f3c774b5151947c36b6f2d7fdd4940bc6263558690/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-default-internal-api-0_ee0ec3ad-df19-4e19-a288-d6ca32779160/glance-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_glance-default-internal-api-0_ee0ec3ad-df19-4e19-a288-d6ca32779160/glance-httpd/0.log: no such file or directory Feb 24 15:15:49 crc kubenswrapper[4982]: W0224 15:15:49.425985 4982 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f7297d_6126_40bc_b5b4_54d01f6cb255.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f7297d_6126_40bc_b5b4_54d01f6cb255.slice: no such file or directory Feb 24 15:15:49 crc kubenswrapper[4982]: W0224 15:15:49.426400 4982 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30138906_e9e1_46d1_87e1_fd842efdf3ed.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30138906_e9e1_46d1_87e1_fd842efdf3ed.slice: no such file or directory Feb 24 15:15:49 crc kubenswrapper[4982]: W0224 15:15:49.426432 4982 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65e38160_7267_4499_85f3_e05aeff196dc.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65e38160_7267_4499_85f3_e05aeff196dc.slice: no such file or directory Feb 24 15:15:49 crc kubenswrapper[4982]: W0224 15:15:49.426487 4982 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf39a573_272f_4a7b_a6f8_751f8b2c0dd9.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf39a573_272f_4a7b_a6f8_751f8b2c0dd9.slice: no such file or directory Feb 24 15:15:49 crc kubenswrapper[4982]: W0224 15:15:49.426533 4982 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0517f81d_f724_4803_9e98_85d228c39b2f.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0517f81d_f724_4803_9e98_85d228c39b2f.slice: no such file or directory Feb 24 15:15:49 crc kubenswrapper[4982]: W0224 15:15:49.426558 4982 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bdcc0e1_05fe_45a5_929e_eba045439850.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bdcc0e1_05fe_45a5_929e_eba045439850.slice: no such file or directory Feb 24 15:15:49 crc kubenswrapper[4982]: W0224 15:15:49.440003 4982 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b450650_8010_4373_9fa1_0c68c3fe9f95.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b450650_8010_4373_9fa1_0c68c3fe9f95.slice: no such file or directory Feb 24 15:15:49 crc kubenswrapper[4982]: E0224 15:15:49.522763 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-conmon-8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-conmon-49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-497af5b1647d09a0c07895a2e17d59aa84c48312ea6a759bfd2887c0d48c388c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a3d588_5808_4049_8f23_d7eb8ac84839.slice/crio-8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-conmon-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-conmon-8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-conmon-06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:15:49 crc kubenswrapper[4982]: E0224 15:15:49.523716 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a3d588_5808_4049_8f23_d7eb8ac84839.slice/crio-conmon-8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a3d588_5808_4049_8f23_d7eb8ac84839.slice/crio-8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-conmon-06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-497af5b1647d09a0c07895a2e17d59aa84c48312ea6a759bfd2887c0d48c388c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-conmon-49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-conmon-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-conmon-8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-conmon-8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:15:49 crc kubenswrapper[4982]: E0224 15:15:49.523816 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-conmon-49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-conmon-06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-conmon-8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a3d588_5808_4049_8f23_d7eb8ac84839.slice/crio-conmon-8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a3d588_5808_4049_8f23_d7eb8ac84839.slice/crio-8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-conmon-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-497af5b1647d09a0c07895a2e17d59aa84c48312ea6a759bfd2887c0d48c388c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-conmon-8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d\": RecentStats: unable to find data in memory cache]" Feb 24 15:15:49 crc kubenswrapper[4982]: E0224 15:15:49.525606 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-conmon-06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-conmon-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-conmon-8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5deaccac_c61e_42c6_8628_2dc559076fa5.slice/crio-conmon-ba973e68f7e02b63d90d5d7c63472294f445f6b7fffd9e33bf086a4c932d7f9a.scope\": RecentStats: unable to find 
data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-conmon-49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-conmon-8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5deaccac_c61e_42c6_8628_2dc559076fa5.slice/crio-ba973e68f7e02b63d90d5d7c63472294f445f6b7fffd9e33bf086a4c932d7f9a.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:15:49 crc kubenswrapper[4982]: E0224 15:15:49.530184 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-conmon-8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-d417ae6b4ede52d8a8cf0654a5c9882818b35e113f30274b5a83df69da744556.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-0bb2a4a662097880235ef39cbb673fa0a78c383884224c010507891f368e858d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-038896d4ca5f8a0b8f04e22761742279ae01b3992e28aace81f93f80dc208424\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5deaccac_c61e_42c6_8628_2dc559076fa5.slice/crio-ba973e68f7e02b63d90d5d7c63472294f445f6b7fffd9e33bf086a4c932d7f9a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-49d20b5cf9ad045bfe983b503280a6351eb765823191dc7fc3c6874520a232e9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a3d588_5808_4049_8f23_d7eb8ac84839.slice/crio-conmon-8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-497af5b1647d09a0c07895a2e17d59aa84c48312ea6a759bfd2887c0d48c388c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-conmon-06799dfd02afadbaa318c6f0ef4fa23b69c071b0c90440a86c662f5501c1c25d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-2081cc286e72c3b2b3be2145f4478ae909f199281d8953ace4a842a309e688c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-2ec885361fd3c51a5e1d64ed341edbd8ee1583385b3b98c3ce284d7de89b21a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded67ff00_778a_4253_8f39_52d8ecbcc41b.slice/crio-8ac3288a6b70494d2b9bee2f2470708478c00066db4e7f9649933e5e8e58efd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5deaccac_c61e_42c6_8628_2dc559076fa5.slice/crio-conmon-ba973e68f7e02b63d90d5d7c63472294f445f6b7fffd9e33bf086a4c932d7f9a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-8e9fe2d99c694de52cfa2d5fd8b56f1f8c5dcaa3ad9815bb3e7e8bb2beeeec73.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0ec3ad_df19_4e19_a288_d6ca32779160.slice/crio-a20cafd746d8245afb045e28f46f63253d47f8a9c17f2bba4fb95a3ec76b02ac\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1571922_8d74_4fb5_bf86_1093938b554d.slice/crio-conmon-dc6819d4ebd1823186eaee2f819a2e45c471dea328addb0041b93210dce47bfe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda92127b8_25b0_4d2a_874d_91cf1acfdc79.slice/crio-conmon-b58a8f4076a603761d14ab0b20e27ab481ba9ee93fec108d2b333c3fe57d49ae.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.056727 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerStarted","Data":"3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38"} Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.057438 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.060917 4982 generic.go:334] "Generic (PLEG): container finished" podID="5deaccac-c61e-42c6-8628-2dc559076fa5" containerID="ba973e68f7e02b63d90d5d7c63472294f445f6b7fffd9e33bf086a4c932d7f9a" exitCode=137 Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.061075 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" event={"ID":"5deaccac-c61e-42c6-8628-2dc559076fa5","Type":"ContainerDied","Data":"ba973e68f7e02b63d90d5d7c63472294f445f6b7fffd9e33bf086a4c932d7f9a"} Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.061165 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" event={"ID":"5deaccac-c61e-42c6-8628-2dc559076fa5","Type":"ContainerDied","Data":"cda93b5e01cdbe55147b9c66035710b4c7414fc261e77d0a4eaad5888723b15b"} Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.061224 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda93b5e01cdbe55147b9c66035710b4c7414fc261e77d0a4eaad5888723b15b" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.062612 4982 generic.go:334] "Generic (PLEG): container finished" podID="57a3d588-5808-4049-8f23-d7eb8ac84839" containerID="8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5" exitCode=137 Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.062780 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75b5d95788-lvg8w" event={"ID":"57a3d588-5808-4049-8f23-d7eb8ac84839","Type":"ContainerDied","Data":"8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5"} Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.062862 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75b5d95788-lvg8w" event={"ID":"57a3d588-5808-4049-8f23-d7eb8ac84839","Type":"ContainerDied","Data":"7d409bc52479c0cd9b0b20de62e1a25bc4f053fe5bfcab02ce419b38b86ea36e"} Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.062932 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d409bc52479c0cd9b0b20de62e1a25bc4f053fe5bfcab02ce419b38b86ea36e" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.092902 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.0967259560000002 podStartE2EDuration="8.092879522s" podCreationTimestamp="2026-02-24 15:15:42 +0000 UTC" firstStartedPulling="2026-02-24 15:15:43.893889118 +0000 UTC m=+1605.512947611" lastFinishedPulling="2026-02-24 
15:15:48.890042644 +0000 UTC m=+1610.509101177" observedRunningTime="2026-02-24 15:15:50.079915074 +0000 UTC m=+1611.698973587" watchObservedRunningTime="2026-02-24 15:15:50.092879522 +0000 UTC m=+1611.711938015" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.099201 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.109314 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.220187 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8svh6\" (UniqueName: \"kubernetes.io/projected/57a3d588-5808-4049-8f23-d7eb8ac84839-kube-api-access-8svh6\") pod \"57a3d588-5808-4049-8f23-d7eb8ac84839\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.220525 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data-custom\") pod \"57a3d588-5808-4049-8f23-d7eb8ac84839\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.220722 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bzwk\" (UniqueName: \"kubernetes.io/projected/5deaccac-c61e-42c6-8628-2dc559076fa5-kube-api-access-5bzwk\") pod \"5deaccac-c61e-42c6-8628-2dc559076fa5\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.220808 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data\") pod \"5deaccac-c61e-42c6-8628-2dc559076fa5\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.220877 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-combined-ca-bundle\") pod \"57a3d588-5808-4049-8f23-d7eb8ac84839\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.221249 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data-custom\") pod \"5deaccac-c61e-42c6-8628-2dc559076fa5\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.221378 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-combined-ca-bundle\") pod \"5deaccac-c61e-42c6-8628-2dc559076fa5\" (UID: \"5deaccac-c61e-42c6-8628-2dc559076fa5\") " Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.221489 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data\") pod \"57a3d588-5808-4049-8f23-d7eb8ac84839\" (UID: \"57a3d588-5808-4049-8f23-d7eb8ac84839\") " Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.231004 4982 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5deaccac-c61e-42c6-8628-2dc559076fa5-kube-api-access-5bzwk" (OuterVolumeSpecName: "kube-api-access-5bzwk") pod "5deaccac-c61e-42c6-8628-2dc559076fa5" (UID: "5deaccac-c61e-42c6-8628-2dc559076fa5"). InnerVolumeSpecName "kube-api-access-5bzwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.231670 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a3d588-5808-4049-8f23-d7eb8ac84839-kube-api-access-8svh6" (OuterVolumeSpecName: "kube-api-access-8svh6") pod "57a3d588-5808-4049-8f23-d7eb8ac84839" (UID: "57a3d588-5808-4049-8f23-d7eb8ac84839"). InnerVolumeSpecName "kube-api-access-8svh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.243715 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5deaccac-c61e-42c6-8628-2dc559076fa5" (UID: "5deaccac-c61e-42c6-8628-2dc559076fa5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.244697 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57a3d588-5808-4049-8f23-d7eb8ac84839" (UID: "57a3d588-5808-4049-8f23-d7eb8ac84839"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.267672 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57a3d588-5808-4049-8f23-d7eb8ac84839" (UID: "57a3d588-5808-4049-8f23-d7eb8ac84839"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.281395 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5deaccac-c61e-42c6-8628-2dc559076fa5" (UID: "5deaccac-c61e-42c6-8628-2dc559076fa5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.312117 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data" (OuterVolumeSpecName: "config-data") pod "5deaccac-c61e-42c6-8628-2dc559076fa5" (UID: "5deaccac-c61e-42c6-8628-2dc559076fa5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.316538 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data" (OuterVolumeSpecName: "config-data") pod "57a3d588-5808-4049-8f23-d7eb8ac84839" (UID: "57a3d588-5808-4049-8f23-d7eb8ac84839"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.324972 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8svh6\" (UniqueName: \"kubernetes.io/projected/57a3d588-5808-4049-8f23-d7eb8ac84839-kube-api-access-8svh6\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.325035 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.325049 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bzwk\" (UniqueName: \"kubernetes.io/projected/5deaccac-c61e-42c6-8628-2dc559076fa5-kube-api-access-5bzwk\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.325061 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.325097 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.325108 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.325119 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5deaccac-c61e-42c6-8628-2dc559076fa5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:50 crc kubenswrapper[4982]: I0224 15:15:50.325130 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a3d588-5808-4049-8f23-d7eb8ac84839-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:51 crc kubenswrapper[4982]: I0224 15:15:51.079459 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74d5f4785d-rlf88" Feb 24 15:15:51 crc kubenswrapper[4982]: I0224 15:15:51.079566 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-75b5d95788-lvg8w" Feb 24 15:15:51 crc kubenswrapper[4982]: I0224 15:15:51.138242 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-75b5d95788-lvg8w"] Feb 24 15:15:51 crc kubenswrapper[4982]: I0224 15:15:51.163368 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-75b5d95788-lvg8w"] Feb 24 15:15:51 crc kubenswrapper[4982]: I0224 15:15:51.164300 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74d5f4785d-rlf88"] Feb 24 15:15:51 crc kubenswrapper[4982]: I0224 15:15:51.180229 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-74d5f4785d-rlf88"] Feb 24 15:15:51 crc kubenswrapper[4982]: I0224 15:15:51.767400 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:15:52 crc kubenswrapper[4982]: I0224 15:15:52.094342 4982 generic.go:334] "Generic (PLEG): container finished" podID="a27d742e-50bb-470c-a3af-8bd17beaca37" containerID="be0111c31d3bd8f073bf5f4d422c284b5e81874e1ff64fe8bf13bdb697ecc77a" exitCode=0 Feb 24 15:15:52 crc kubenswrapper[4982]: I0224 15:15:52.094420 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gp5wp" event={"ID":"a27d742e-50bb-470c-a3af-8bd17beaca37","Type":"ContainerDied","Data":"be0111c31d3bd8f073bf5f4d422c284b5e81874e1ff64fe8bf13bdb697ecc77a"} Feb 24 15:15:52 crc kubenswrapper[4982]: I0224 15:15:52.094684 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="ceilometer-central-agent" containerID="cri-o://f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896" gracePeriod=30 Feb 24 15:15:52 crc kubenswrapper[4982]: I0224 15:15:52.094807 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="ceilometer-notification-agent" containerID="cri-o://e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b" gracePeriod=30 Feb 24 15:15:52 crc kubenswrapper[4982]: I0224 15:15:52.094897 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="sg-core" containerID="cri-o://d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0" gracePeriod=30 Feb 24 15:15:52 crc kubenswrapper[4982]: I0224 15:15:52.095019 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="proxy-httpd" containerID="cri-o://3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38" gracePeriod=30 Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.110302 4982 generic.go:334] "Generic (PLEG): container finished" podID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerID="3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38" exitCode=0 Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.110594 4982 generic.go:334] "Generic (PLEG): container finished" podID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerID="d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0" exitCode=2 Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.110603 4982 generic.go:334] "Generic (PLEG): container finished" podID="d36c7a7a-058f-4f57-be56-3130a34c5c67" 
containerID="e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b" exitCode=0 Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.110381 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerDied","Data":"3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38"} Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.110714 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerDied","Data":"d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0"} Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.110731 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerDied","Data":"e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b"} Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.181149 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a3d588-5808-4049-8f23-d7eb8ac84839" path="/var/lib/kubelet/pods/57a3d588-5808-4049-8f23-d7eb8ac84839/volumes" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.182389 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5deaccac-c61e-42c6-8628-2dc559076fa5" path="/var/lib/kubelet/pods/5deaccac-c61e-42c6-8628-2dc559076fa5/volumes" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.677153 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gp5wp" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.689723 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.703958 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fjv78" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="registry-server" probeResult="failure" output=< Feb 24 15:15:53 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:15:53 crc kubenswrapper[4982]: > Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.829869 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-combined-ca-bundle\") pod \"a27d742e-50bb-470c-a3af-8bd17beaca37\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.830126 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7578f\" (UniqueName: \"kubernetes.io/projected/d36c7a7a-058f-4f57-be56-3130a34c5c67-kube-api-access-7578f\") pod \"d36c7a7a-058f-4f57-be56-3130a34c5c67\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.830263 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-run-httpd\") pod \"d36c7a7a-058f-4f57-be56-3130a34c5c67\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.830452 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-config-data\") pod \"a27d742e-50bb-470c-a3af-8bd17beaca37\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.830634 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th6cj\" (UniqueName: \"kubernetes.io/projected/a27d742e-50bb-470c-a3af-8bd17beaca37-kube-api-access-th6cj\") pod \"a27d742e-50bb-470c-a3af-8bd17beaca37\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.830825 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-scripts\") pod \"a27d742e-50bb-470c-a3af-8bd17beaca37\" (UID: \"a27d742e-50bb-470c-a3af-8bd17beaca37\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.830888 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-scripts\") pod \"d36c7a7a-058f-4f57-be56-3130a34c5c67\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.830948 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-config-data\") pod \"d36c7a7a-058f-4f57-be56-3130a34c5c67\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.831013 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-sg-core-conf-yaml\") pod 
\"d36c7a7a-058f-4f57-be56-3130a34c5c67\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.831144 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-log-httpd\") pod \"d36c7a7a-058f-4f57-be56-3130a34c5c67\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.831212 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-combined-ca-bundle\") pod \"d36c7a7a-058f-4f57-be56-3130a34c5c67\" (UID: \"d36c7a7a-058f-4f57-be56-3130a34c5c67\") " Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.831753 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d36c7a7a-058f-4f57-be56-3130a34c5c67" (UID: "d36c7a7a-058f-4f57-be56-3130a34c5c67"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.832758 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.837113 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-scripts" (OuterVolumeSpecName: "scripts") pod "d36c7a7a-058f-4f57-be56-3130a34c5c67" (UID: "d36c7a7a-058f-4f57-be56-3130a34c5c67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.837427 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27d742e-50bb-470c-a3af-8bd17beaca37-kube-api-access-th6cj" (OuterVolumeSpecName: "kube-api-access-th6cj") pod "a27d742e-50bb-470c-a3af-8bd17beaca37" (UID: "a27d742e-50bb-470c-a3af-8bd17beaca37"). InnerVolumeSpecName "kube-api-access-th6cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.837741 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36c7a7a-058f-4f57-be56-3130a34c5c67-kube-api-access-7578f" (OuterVolumeSpecName: "kube-api-access-7578f") pod "d36c7a7a-058f-4f57-be56-3130a34c5c67" (UID: "d36c7a7a-058f-4f57-be56-3130a34c5c67"). InnerVolumeSpecName "kube-api-access-7578f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.837804 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d36c7a7a-058f-4f57-be56-3130a34c5c67" (UID: "d36c7a7a-058f-4f57-be56-3130a34c5c67"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.849394 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-scripts" (OuterVolumeSpecName: "scripts") pod "a27d742e-50bb-470c-a3af-8bd17beaca37" (UID: "a27d742e-50bb-470c-a3af-8bd17beaca37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.871645 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a27d742e-50bb-470c-a3af-8bd17beaca37" (UID: "a27d742e-50bb-470c-a3af-8bd17beaca37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.871670 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d36c7a7a-058f-4f57-be56-3130a34c5c67" (UID: "d36c7a7a-058f-4f57-be56-3130a34c5c67"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.885796 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-config-data" (OuterVolumeSpecName: "config-data") pod "a27d742e-50bb-470c-a3af-8bd17beaca37" (UID: "a27d742e-50bb-470c-a3af-8bd17beaca37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.931390 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d36c7a7a-058f-4f57-be56-3130a34c5c67" (UID: "d36c7a7a-058f-4f57-be56-3130a34c5c67"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.937082 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th6cj\" (UniqueName: \"kubernetes.io/projected/a27d742e-50bb-470c-a3af-8bd17beaca37-kube-api-access-th6cj\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.937118 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.937129 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.937137 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.937146 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d36c7a7a-058f-4f57-be56-3130a34c5c67-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.937155 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.937163 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.937171 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7578f\" (UniqueName: \"kubernetes.io/projected/d36c7a7a-058f-4f57-be56-3130a34c5c67-kube-api-access-7578f\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.937179 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a27d742e-50bb-470c-a3af-8bd17beaca37-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:53 crc kubenswrapper[4982]: I0224 15:15:53.987845 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-config-data" (OuterVolumeSpecName: "config-data") pod "d36c7a7a-058f-4f57-be56-3130a34c5c67" (UID: "d36c7a7a-058f-4f57-be56-3130a34c5c67"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.039874 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d36c7a7a-058f-4f57-be56-3130a34c5c67-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.127123 4982 generic.go:334] "Generic (PLEG): container finished" podID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerID="f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896" exitCode=0 Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.127229 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerDied","Data":"f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896"} Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.127278 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d36c7a7a-058f-4f57-be56-3130a34c5c67","Type":"ContainerDied","Data":"22bf10557ac737c18049b100b86f2b63e988d91e581c8cde3167c02d15c42335"} Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.127305 4982 scope.go:117] "RemoveContainer" containerID="3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38" Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.127237 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.129481 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gp5wp" event={"ID":"a27d742e-50bb-470c-a3af-8bd17beaca37","Type":"ContainerDied","Data":"efdceddd97b8763bf2ec549048e6221468d33d1c4916f39f0e786d22134406fe"} Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.129558 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efdceddd97b8763bf2ec549048e6221468d33d1c4916f39f0e786d22134406fe" Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.129636 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gp5wp"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.184784 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.189290 4982 scope.go:117] "RemoveContainer" containerID="d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.202404 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.214432 4982 scope.go:117] "RemoveContainer" containerID="e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.227576 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.228209 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a3d588-5808-4049-8f23-d7eb8ac84839" containerName="heat-api"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228227 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a3d588-5808-4049-8f23-d7eb8ac84839" containerName="heat-api"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.228272 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5deaccac-c61e-42c6-8628-2dc559076fa5" containerName="heat-cfnapi"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228280 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5deaccac-c61e-42c6-8628-2dc559076fa5" containerName="heat-cfnapi"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.228317 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="sg-core"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228327 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="sg-core"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.228340 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="ceilometer-notification-agent"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228347 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="ceilometer-notification-agent"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.228411 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27d742e-50bb-470c-a3af-8bd17beaca37" containerName="nova-cell0-conductor-db-sync"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228423 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27d742e-50bb-470c-a3af-8bd17beaca37" containerName="nova-cell0-conductor-db-sync"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.228525 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="ceilometer-central-agent"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228540 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="ceilometer-central-agent"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.228550 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="proxy-httpd"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228560 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="proxy-httpd"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228833 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a3d588-5808-4049-8f23-d7eb8ac84839" containerName="heat-api"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228856 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="proxy-httpd"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228870 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="sg-core"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228914 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="ceilometer-central-agent"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228931 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" containerName="ceilometer-notification-agent"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228950 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27d742e-50bb-470c-a3af-8bd17beaca37" containerName="nova-cell0-conductor-db-sync"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.228965 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5deaccac-c61e-42c6-8628-2dc559076fa5" containerName="heat-cfnapi"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.235068 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.235183 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.238381 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.238698 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.262791 4982 scope.go:117] "RemoveContainer" containerID="f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.289062 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
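The RemoveStaleState burst above is the kubelet's cpu_manager and memory_manager dropping per-container pinning state for containers that no longer exist (the heat-api/heat-cfnapi, ceilometer, and nova-cell0-conductor-db-sync containers deleted just before), done while admitting the re-created ceilometer-0. Each record is klog structured output: a quoted message followed by key="value" pairs, so it can be filtered mechanically. A minimal sketch (not kubelet code; the file name and stdin convention are ours) that pulls podUID/containerName out of such lines:

    // klogfields.go - toy extractor for key="value" pairs in structured
    // kubelet journal entries like the RemoveStaleState lines above.
    // Usage (assumption, any journal source works):
    //   journalctl -u kubelet --no-pager | go run klogfields.go
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    var kv = regexp.MustCompile(`(\w+)="([^"]*)"`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can exceed the 64 KiB default
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, "RemoveStaleState") {
                continue
            }
            fields := map[string]string{}
            for _, m := range kv.FindAllStringSubmatch(line, -1) {
                fields[m[1]] = m[2]
            }
            fmt.Printf("%s %s\n", fields["podUID"], fields["containerName"])
        }
    }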
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.291113 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.298090 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8zmkr"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.298373 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.301149 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.318326 4982 scope.go:117] "RemoveContainer" containerID="3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.318977 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38\": container with ID starting with 3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38 not found: ID does not exist" containerID="3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.319053 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38"} err="failed to get container status \"3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38\": rpc error: code = NotFound desc = could not find container \"3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38\": container with ID starting with 3c67d2a5a5bbd0231e37b8b2b98e4c99b76c9216ecf917d9630a7f2ead0dba38 not found: ID does not exist"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.319119 4982 scope.go:117] "RemoveContainer" containerID="d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.319753 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0\": container with ID starting with d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0 not found: ID does not exist" containerID="d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.319782 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0"} err="failed to get container status \"d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0\": rpc error: code = NotFound desc = could not find container \"d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0\": container with ID starting with d09ae7283a5867f67346792bd5ab0aa69990fd8c1d8cfcf42da2beab20b2c1e0 not found: ID does not exist"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.319829 4982 scope.go:117] "RemoveContainer" containerID="e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.320123 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b\": container with ID starting with e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b not found: ID does not exist" containerID="e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.320148 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b"} err="failed to get container status \"e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b\": rpc error: code = NotFound desc = could not find container \"e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b\": container with ID starting with e80ee2822feb627158f013e4bb49e8c6f660bf841fe5dbfdfd3c2d3cb15bee5b not found: ID does not exist"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.320161 4982 scope.go:117] "RemoveContainer" containerID="f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896"
Feb 24 15:15:54 crc kubenswrapper[4982]: E0224 15:15:54.320399 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896\": container with ID starting with f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896 not found: ID does not exist" containerID="f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.320415 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896"} err="failed to get container status \"f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896\": rpc error: code = NotFound desc = could not find container \"f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896\": container with ID starting with f91c4e730ca4d7acf8b2d61cf0b6157eef5f685046744462c30d86a2cad2d896 not found: ID does not exist"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.347091 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-log-httpd\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.347585 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.347609 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.347696 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-run-httpd\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
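Each RemoveContainer above is followed by a ContainerStatus/DeleteContainer NotFound pair: CRI-O had already dropped those containers along with the old ceilometer-0, so the kubelet's best-effort cleanup finds nothing to delete and the E-level lines are noise rather than real failures. A small sketch (hypothetical helper, same stdin convention as the sketch above) that separates this benign pattern from deletions that failed for some other reason:

    // notfound.go - classify "DeleteContainer returned error" entries:
    // NotFound means the container was already gone (benign race),
    // anything else deserves a look.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        benign, other := 0, 0
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1<<20), 1<<20)
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, "DeleteContainer returned error") {
                continue
            }
            if strings.Contains(line, "code = NotFound") {
                benign++ // already removed, as in the four pairs above
            } else {
                other++ // unexpected failure mode
            }
        }
        fmt.Printf("benign NotFound deletions: %d, other failures: %d\n", benign, other)
    }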
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.347780 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-scripts\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.347801 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tw9s\" (UniqueName: \"kubernetes.io/projected/269117fa-6bb8-4cfa-9608-a43e8fef59a9-kube-api-access-7tw9s\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.347820 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-config-data\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450401 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802ee268-f1cc-4138-8361-c61c4b2d005a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"802ee268-f1cc-4138-8361-c61c4b2d005a\") " pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450458 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-run-httpd\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450531 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802ee268-f1cc-4138-8361-c61c4b2d005a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"802ee268-f1cc-4138-8361-c61c4b2d005a\") " pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450598 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-scripts\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450621 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tw9s\" (UniqueName: \"kubernetes.io/projected/269117fa-6bb8-4cfa-9608-a43e8fef59a9-kube-api-access-7tw9s\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450641 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-config-data\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450664 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kbgh\" (UniqueName: \"kubernetes.io/projected/802ee268-f1cc-4138-8361-c61c4b2d005a-kube-api-access-8kbgh\") pod \"nova-cell0-conductor-0\" (UID: \"802ee268-f1cc-4138-8361-c61c4b2d005a\") " pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450704 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-log-httpd\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450733 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450747 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.450999 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-run-httpd\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.451483 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-log-httpd\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.456411 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-config-data\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.457163 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.457735 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.457923 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-scripts\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.472235 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tw9s\" (UniqueName: \"kubernetes.io/projected/269117fa-6bb8-4cfa-9608-a43e8fef59a9-kube-api-access-7tw9s\") pod \"ceilometer-0\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.552550 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802ee268-f1cc-4138-8361-c61c4b2d005a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"802ee268-f1cc-4138-8361-c61c4b2d005a\") " pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.552630 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802ee268-f1cc-4138-8361-c61c4b2d005a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"802ee268-f1cc-4138-8361-c61c4b2d005a\") " pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.552746 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kbgh\" (UniqueName: \"kubernetes.io/projected/802ee268-f1cc-4138-8361-c61c4b2d005a-kube-api-access-8kbgh\") pod \"nova-cell0-conductor-0\" (UID: \"802ee268-f1cc-4138-8361-c61c4b2d005a\") " pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.556349 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/802ee268-f1cc-4138-8361-c61c4b2d005a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"802ee268-f1cc-4138-8361-c61c4b2d005a\") " pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.556447 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/802ee268-f1cc-4138-8361-c61c4b2d005a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"802ee268-f1cc-4138-8361-c61c4b2d005a\") " pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.564223 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.571201 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kbgh\" (UniqueName: \"kubernetes.io/projected/802ee268-f1cc-4138-8361-c61c4b2d005a-kube-api-access-8kbgh\") pod \"nova-cell0-conductor-0\" (UID: \"802ee268-f1cc-4138-8361-c61c4b2d005a\") " pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:54 crc kubenswrapper[4982]: I0224 15:15:54.614290 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:55 crc kubenswrapper[4982]: I0224 15:15:55.094746 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 24 15:15:55 crc kubenswrapper[4982]: I0224 15:15:55.164114 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36c7a7a-058f-4f57-be56-3130a34c5c67" path="/var/lib/kubelet/pods/d36c7a7a-058f-4f57-be56-3130a34c5c67/volumes"
Feb 24 15:15:55 crc kubenswrapper[4982]: I0224 15:15:55.165376 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerStarted","Data":"ec8d3bdc842c7050b6ce1a79b823b403eb28a3c323faa23d863d24cc271c21ef"}
Feb 24 15:15:55 crc kubenswrapper[4982]: I0224 15:15:55.181991 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 24 15:15:56 crc kubenswrapper[4982]: I0224 15:15:56.159886 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerStarted","Data":"ff61aad5c6ce7cd609d8035ca17027d2690a6576ebe0a88490ad955db249061f"}
Feb 24 15:15:56 crc kubenswrapper[4982]: I0224 15:15:56.161261 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"802ee268-f1cc-4138-8361-c61c4b2d005a","Type":"ContainerStarted","Data":"b7992b03b3846479c9a0422630ca0af2fd021837cdb58232ee8f49e7b2b205f2"}
Feb 24 15:15:56 crc kubenswrapper[4982]: I0224 15:15:56.161311 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"802ee268-f1cc-4138-8361-c61c4b2d005a","Type":"ContainerStarted","Data":"1a16be7828e82bbbba103ca616f4cc39eb1f5b00db41950297d11f23a3fe47b7"}
Feb 24 15:15:56 crc kubenswrapper[4982]: I0224 15:15:56.161405 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 24 15:15:56 crc kubenswrapper[4982]: I0224 15:15:56.177346 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.177328244 podStartE2EDuration="2.177328244s" podCreationTimestamp="2026-02-24 15:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:15:56.176681607 +0000 UTC m=+1617.795740100" watchObservedRunningTime="2026-02-24 15:15:56.177328244 +0000 UTC m=+1617.796386737"
Feb 24 15:15:57 crc kubenswrapper[4982]: I0224 15:15:57.175631 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerStarted","Data":"8373f31df7bfed8db4569cd59accdb15bfec8d9d70c2ec1e6085e25851fc8e2b"}
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.193988 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerStarted","Data":"b9d20b5e02afe1ff6e63dff4b18c545b8a475da29835b5a5e3e588ee5be0cdc9"}
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.203768 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-fjb7l"]
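The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (15:15:56.177328244 - 15:15:54 = 2.177328244s), and the pulling timestamps are Go's zero time because nova-cell0-conductor-0 needed no image pull, which is why the SLO duration equals the end-to-end duration. A quick check of that arithmetic (timestamps copied from the entry; the m=+... monotonic-clock suffix is dropped since it is not part of the parseable timestamp):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // layout matching the "2026-02-24 15:15:54 +0000 UTC" form in the log
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2026-02-24 15:15:54 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2026-02-24 15:15:56.177328244 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 2.177328244s, matching podStartE2EDuration
    }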
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.205636 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fjb7l"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.228855 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-296f-account-create-update-tkrxl"]
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.230307 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-296f-account-create-update-tkrxl"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.233327 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.250627 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-fjb7l"]
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.294708 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-296f-account-create-update-tkrxl"]
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.333772 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2p7s\" (UniqueName: \"kubernetes.io/projected/62cdf76f-7239-423a-93cf-16b6c38c3525-kube-api-access-l2p7s\") pod \"aodh-db-create-fjb7l\" (UID: \"62cdf76f-7239-423a-93cf-16b6c38c3525\") " pod="openstack/aodh-db-create-fjb7l"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.333833 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cdf76f-7239-423a-93cf-16b6c38c3525-operator-scripts\") pod \"aodh-db-create-fjb7l\" (UID: \"62cdf76f-7239-423a-93cf-16b6c38c3525\") " pod="openstack/aodh-db-create-fjb7l"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.436350 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2p7s\" (UniqueName: \"kubernetes.io/projected/62cdf76f-7239-423a-93cf-16b6c38c3525-kube-api-access-l2p7s\") pod \"aodh-db-create-fjb7l\" (UID: \"62cdf76f-7239-423a-93cf-16b6c38c3525\") " pod="openstack/aodh-db-create-fjb7l"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.436447 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cdf76f-7239-423a-93cf-16b6c38c3525-operator-scripts\") pod \"aodh-db-create-fjb7l\" (UID: \"62cdf76f-7239-423a-93cf-16b6c38c3525\") " pod="openstack/aodh-db-create-fjb7l"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.436563 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c574c2e-47b9-43c8-b78d-8a10566c717a-operator-scripts\") pod \"aodh-296f-account-create-update-tkrxl\" (UID: \"4c574c2e-47b9-43c8-b78d-8a10566c717a\") " pod="openstack/aodh-296f-account-create-update-tkrxl"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.436649 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629cs\" (UniqueName: \"kubernetes.io/projected/4c574c2e-47b9-43c8-b78d-8a10566c717a-kube-api-access-629cs\") pod \"aodh-296f-account-create-update-tkrxl\" (UID: \"4c574c2e-47b9-43c8-b78d-8a10566c717a\") " pod="openstack/aodh-296f-account-create-update-tkrxl"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.437589 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cdf76f-7239-423a-93cf-16b6c38c3525-operator-scripts\") pod \"aodh-db-create-fjb7l\" (UID: \"62cdf76f-7239-423a-93cf-16b6c38c3525\") " pod="openstack/aodh-db-create-fjb7l"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.473084 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2p7s\" (UniqueName: \"kubernetes.io/projected/62cdf76f-7239-423a-93cf-16b6c38c3525-kube-api-access-l2p7s\") pod \"aodh-db-create-fjb7l\" (UID: \"62cdf76f-7239-423a-93cf-16b6c38c3525\") " pod="openstack/aodh-db-create-fjb7l"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.529373 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fjb7l"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.538484 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-629cs\" (UniqueName: \"kubernetes.io/projected/4c574c2e-47b9-43c8-b78d-8a10566c717a-kube-api-access-629cs\") pod \"aodh-296f-account-create-update-tkrxl\" (UID: \"4c574c2e-47b9-43c8-b78d-8a10566c717a\") " pod="openstack/aodh-296f-account-create-update-tkrxl"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.538775 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c574c2e-47b9-43c8-b78d-8a10566c717a-operator-scripts\") pod \"aodh-296f-account-create-update-tkrxl\" (UID: \"4c574c2e-47b9-43c8-b78d-8a10566c717a\") " pod="openstack/aodh-296f-account-create-update-tkrxl"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.539609 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c574c2e-47b9-43c8-b78d-8a10566c717a-operator-scripts\") pod \"aodh-296f-account-create-update-tkrxl\" (UID: \"4c574c2e-47b9-43c8-b78d-8a10566c717a\") " pod="openstack/aodh-296f-account-create-update-tkrxl"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.568348 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-629cs\" (UniqueName: \"kubernetes.io/projected/4c574c2e-47b9-43c8-b78d-8a10566c717a-kube-api-access-629cs\") pod \"aodh-296f-account-create-update-tkrxl\" (UID: \"4c574c2e-47b9-43c8-b78d-8a10566c717a\") " pod="openstack/aodh-296f-account-create-update-tkrxl"
Feb 24 15:15:58 crc kubenswrapper[4982]: I0224 15:15:58.853280 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-296f-account-create-update-tkrxl"
Feb 24 15:15:59 crc kubenswrapper[4982]: I0224 15:15:59.060366 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-fjb7l"]
Feb 24 15:15:59 crc kubenswrapper[4982]: I0224 15:15:59.235986 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fjb7l" event={"ID":"62cdf76f-7239-423a-93cf-16b6c38c3525","Type":"ContainerStarted","Data":"5f18991354452d96867559d118fb72989c9da14ec489e84cd2d3c0e2ace91439"}
Feb 24 15:15:59 crc kubenswrapper[4982]: I0224 15:15:59.377971 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-296f-account-create-update-tkrxl"]
Feb 24 15:15:59 crc kubenswrapper[4982]: I0224 15:15:59.403810 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.133945 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532436-hqpch"]
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.136037 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532436-hqpch"
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.140417 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.140732 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.140826 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.154466 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532436-hqpch"]
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.252072 4982 generic.go:334] "Generic (PLEG): container finished" podID="62cdf76f-7239-423a-93cf-16b6c38c3525" containerID="da56872de2b8d7292f2b680fc740a16738467fd869d631a5999eac452d2b5756" exitCode=0
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.252172 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fjb7l" event={"ID":"62cdf76f-7239-423a-93cf-16b6c38c3525","Type":"ContainerDied","Data":"da56872de2b8d7292f2b680fc740a16738467fd869d631a5999eac452d2b5756"}
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.257219 4982 generic.go:334] "Generic (PLEG): container finished" podID="4c574c2e-47b9-43c8-b78d-8a10566c717a" containerID="5c1dde1444e0601ced52d96dca09169cec3c135d70e24f8e143b002dad0f65c6" exitCode=0
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.257403 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-296f-account-create-update-tkrxl" event={"ID":"4c574c2e-47b9-43c8-b78d-8a10566c717a","Type":"ContainerDied","Data":"5c1dde1444e0601ced52d96dca09169cec3c135d70e24f8e143b002dad0f65c6"}
Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.257454 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-296f-account-create-update-tkrxl" event={"ID":"4c574c2e-47b9-43c8-b78d-8a10566c717a","Type":"ContainerStarted","Data":"540d8381e9aeae6c2cbf1ae93f886da134592c46194e951a1fd239f1c4eb824f"}
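The generic.go:334 / ContainerDied pairs above record the aodh-db-create and account-create job containers exiting with exitCode=0, i.e. normal job completion rather than a crash; the kubelet then surfaces the Died event through PLEG. The event={...} payload is printed as JSON, so it round-trips into a struct. A sketch (struct shape inferred from the log text, not taken from kubelet source):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // plegEvent mirrors the event={...} payload in the PLEG lines above.
    type plegEvent struct {
        ID   string // pod UID
        Type string // ContainerStarted, ContainerDied, ...
        Data string // container or sandbox ID
    }

    func main() {
        raw := `{"ID":"62cdf76f-7239-423a-93cf-16b6c38c3525","Type":"ContainerDied","Data":"da56872de2b8d7292f2b680fc740a16738467fd869d631a5999eac452d2b5756"}`
        var ev plegEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("%s for pod %s (container %.12s...)\n", ev.Type, ev.ID, ev.Data)
    }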
event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerStarted","Data":"007d9578125210c050d09ee07e5b12ac9640a3260058d0197df155c73df0d661"} Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.262047 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.300048 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkf7q\" (UniqueName: \"kubernetes.io/projected/e218d80b-e585-4a84-8d6f-d68d54b5bc20-kube-api-access-rkf7q\") pod \"auto-csr-approver-29532436-hqpch\" (UID: \"e218d80b-e585-4a84-8d6f-d68d54b5bc20\") " pod="openshift-infra/auto-csr-approver-29532436-hqpch" Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.300782 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6835797019999998 podStartE2EDuration="6.300760921s" podCreationTimestamp="2026-02-24 15:15:54 +0000 UTC" firstStartedPulling="2026-02-24 15:15:55.107267297 +0000 UTC m=+1616.726325810" lastFinishedPulling="2026-02-24 15:15:59.724448536 +0000 UTC m=+1621.343507029" observedRunningTime="2026-02-24 15:16:00.286315014 +0000 UTC m=+1621.905373517" watchObservedRunningTime="2026-02-24 15:16:00.300760921 +0000 UTC m=+1621.919819454" Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.407994 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkf7q\" (UniqueName: \"kubernetes.io/projected/e218d80b-e585-4a84-8d6f-d68d54b5bc20-kube-api-access-rkf7q\") pod \"auto-csr-approver-29532436-hqpch\" (UID: \"e218d80b-e585-4a84-8d6f-d68d54b5bc20\") " pod="openshift-infra/auto-csr-approver-29532436-hqpch" Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.433567 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkf7q\" (UniqueName: \"kubernetes.io/projected/e218d80b-e585-4a84-8d6f-d68d54b5bc20-kube-api-access-rkf7q\") pod \"auto-csr-approver-29532436-hqpch\" (UID: \"e218d80b-e585-4a84-8d6f-d68d54b5bc20\") " pod="openshift-infra/auto-csr-approver-29532436-hqpch" Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.475577 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532436-hqpch" Feb 24 15:16:00 crc kubenswrapper[4982]: I0224 15:16:00.973632 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532436-hqpch"] Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.279349 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532436-hqpch" event={"ID":"e218d80b-e585-4a84-8d6f-d68d54b5bc20","Type":"ContainerStarted","Data":"4c69e341903dbb95ab01d24591688dcb9e8d3b21691a02ca89aa45ce68af3237"} Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.916213 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-296f-account-create-update-tkrxl" Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.923226 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-fjb7l" Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.956343 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-629cs\" (UniqueName: \"kubernetes.io/projected/4c574c2e-47b9-43c8-b78d-8a10566c717a-kube-api-access-629cs\") pod \"4c574c2e-47b9-43c8-b78d-8a10566c717a\" (UID: \"4c574c2e-47b9-43c8-b78d-8a10566c717a\") " Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.956707 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2p7s\" (UniqueName: \"kubernetes.io/projected/62cdf76f-7239-423a-93cf-16b6c38c3525-kube-api-access-l2p7s\") pod \"62cdf76f-7239-423a-93cf-16b6c38c3525\" (UID: \"62cdf76f-7239-423a-93cf-16b6c38c3525\") " Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.956756 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cdf76f-7239-423a-93cf-16b6c38c3525-operator-scripts\") pod \"62cdf76f-7239-423a-93cf-16b6c38c3525\" (UID: \"62cdf76f-7239-423a-93cf-16b6c38c3525\") " Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.956802 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c574c2e-47b9-43c8-b78d-8a10566c717a-operator-scripts\") pod \"4c574c2e-47b9-43c8-b78d-8a10566c717a\" (UID: \"4c574c2e-47b9-43c8-b78d-8a10566c717a\") " Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.958486 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c574c2e-47b9-43c8-b78d-8a10566c717a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c574c2e-47b9-43c8-b78d-8a10566c717a" (UID: "4c574c2e-47b9-43c8-b78d-8a10566c717a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.958537 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cdf76f-7239-423a-93cf-16b6c38c3525-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62cdf76f-7239-423a-93cf-16b6c38c3525" (UID: "62cdf76f-7239-423a-93cf-16b6c38c3525"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.976965 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c574c2e-47b9-43c8-b78d-8a10566c717a-kube-api-access-629cs" (OuterVolumeSpecName: "kube-api-access-629cs") pod "4c574c2e-47b9-43c8-b78d-8a10566c717a" (UID: "4c574c2e-47b9-43c8-b78d-8a10566c717a"). InnerVolumeSpecName "kube-api-access-629cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:01 crc kubenswrapper[4982]: I0224 15:16:01.977445 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62cdf76f-7239-423a-93cf-16b6c38c3525-kube-api-access-l2p7s" (OuterVolumeSpecName: "kube-api-access-l2p7s") pod "62cdf76f-7239-423a-93cf-16b6c38c3525" (UID: "62cdf76f-7239-423a-93cf-16b6c38c3525"). InnerVolumeSpecName "kube-api-access-l2p7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.059953 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2p7s\" (UniqueName: \"kubernetes.io/projected/62cdf76f-7239-423a-93cf-16b6c38c3525-kube-api-access-l2p7s\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.060241 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cdf76f-7239-423a-93cf-16b6c38c3525-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.060386 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c574c2e-47b9-43c8-b78d-8a10566c717a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.060472 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-629cs\" (UniqueName: \"kubernetes.io/projected/4c574c2e-47b9-43c8-b78d-8a10566c717a-kube-api-access-629cs\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.294432 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-296f-account-create-update-tkrxl" Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.294434 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-296f-account-create-update-tkrxl" event={"ID":"4c574c2e-47b9-43c8-b78d-8a10566c717a","Type":"ContainerDied","Data":"540d8381e9aeae6c2cbf1ae93f886da134592c46194e951a1fd239f1c4eb824f"} Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.294550 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="540d8381e9aeae6c2cbf1ae93f886da134592c46194e951a1fd239f1c4eb824f" Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.296292 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fjb7l" event={"ID":"62cdf76f-7239-423a-93cf-16b6c38c3525","Type":"ContainerDied","Data":"5f18991354452d96867559d118fb72989c9da14ec489e84cd2d3c0e2ace91439"} Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.296789 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f18991354452d96867559d118fb72989c9da14ec489e84cd2d3c0e2ace91439" Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.296844 4982 util.go:48] "No ready sandbox for pod can be found. 
Feb 24 15:16:02 crc kubenswrapper[4982]: I0224 15:16:02.296844 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fjb7l"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.654026 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-zb79k"]
Feb 24 15:16:03 crc kubenswrapper[4982]: E0224 15:16:03.654738 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c574c2e-47b9-43c8-b78d-8a10566c717a" containerName="mariadb-account-create-update"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.654770 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c574c2e-47b9-43c8-b78d-8a10566c717a" containerName="mariadb-account-create-update"
Feb 24 15:16:03 crc kubenswrapper[4982]: E0224 15:16:03.654801 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cdf76f-7239-423a-93cf-16b6c38c3525" containerName="mariadb-database-create"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.654807 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cdf76f-7239-423a-93cf-16b6c38c3525" containerName="mariadb-database-create"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.655029 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c574c2e-47b9-43c8-b78d-8a10566c717a" containerName="mariadb-account-create-update"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.655038 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cdf76f-7239-423a-93cf-16b6c38c3525" containerName="mariadb-database-create"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.655822 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.660719 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.675695 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-2qctw"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.675878 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.675999 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.697692 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fjv78" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="registry-server" probeResult="failure" output=<
Feb 24 15:16:03 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s
Feb 24 15:16:03 crc kubenswrapper[4982]: >
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.707927 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-scripts\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.708029 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-combined-ca-bundle\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
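The prober.go:107 record above is a startup-probe timeout for the community-operators registry-server: nothing accepted a connection on port 50051 within 1s. A rough hand-run equivalent from the node (the pod IP is a placeholder you must substitute, and the kubelet's actual probe for this port is gRPC-based; this only tests TCP reachability):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "PODIP:50051" is a placeholder for the registry-server pod address.
        conn, err := net.DialTimeout("tcp", "PODIP:50051", time.Second)
        if err != nil {
            fmt.Println("probe would fail:", err) // corresponds to the timeout output above
            return
        }
        conn.Close()
        fmt.Println("port reachable within 1s")
    }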
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.708135 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnr8r\" (UniqueName: \"kubernetes.io/projected/e3f1556f-4320-4c99-9296-8526ded51204-kube-api-access-wnr8r\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.708170 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-config-data\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.737677 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zb79k"]
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.810368 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnr8r\" (UniqueName: \"kubernetes.io/projected/e3f1556f-4320-4c99-9296-8526ded51204-kube-api-access-wnr8r\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.810428 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-config-data\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.810518 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-scripts\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.810579 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-combined-ca-bundle\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.827912 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-config-data\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.831467 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-combined-ca-bundle\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.846982 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-scripts\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:03 crc kubenswrapper[4982]: I0224 15:16:03.850134 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnr8r\" (UniqueName: \"kubernetes.io/projected/e3f1556f-4320-4c99-9296-8526ded51204-kube-api-access-wnr8r\") pod \"aodh-db-sync-zb79k\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:04 crc kubenswrapper[4982]: I0224 15:16:04.011683 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zb79k"
Feb 24 15:16:04 crc kubenswrapper[4982]: I0224 15:16:04.336859 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532436-hqpch" event={"ID":"e218d80b-e585-4a84-8d6f-d68d54b5bc20","Type":"ContainerStarted","Data":"06ec03fc44f588987a1b5fed74a2340c9f6627c1fbe206cd3373c5373247737b"}
Feb 24 15:16:04 crc kubenswrapper[4982]: I0224 15:16:04.376951 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532436-hqpch" podStartSLOduration=2.472345048 podStartE2EDuration="4.376930825s" podCreationTimestamp="2026-02-24 15:16:00 +0000 UTC" firstStartedPulling="2026-02-24 15:16:00.970835621 +0000 UTC m=+1622.589894114" lastFinishedPulling="2026-02-24 15:16:02.875421398 +0000 UTC m=+1624.494479891" observedRunningTime="2026-02-24 15:16:04.36984164 +0000 UTC m=+1625.988900143" watchObservedRunningTime="2026-02-24 15:16:04.376930825 +0000 UTC m=+1625.995989318"
Feb 24 15:16:04 crc kubenswrapper[4982]: I0224 15:16:04.547348 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zb79k"]
Feb 24 15:16:04 crc kubenswrapper[4982]: I0224 15:16:04.655529 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.349038 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zb79k" event={"ID":"e3f1556f-4320-4c99-9296-8526ded51204","Type":"ContainerStarted","Data":"47b1244f18166bda5f82a6dbc6eca93925a683ac4d577f46638900f578ba8497"}
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.383973 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-h76wg"]
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.387986 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.401009 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h76wg"]
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.401586 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.401773 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.457369 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-config-data\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.457411 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfj6h\" (UniqueName: \"kubernetes.io/projected/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-kube-api-access-dfj6h\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.457573 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.457708 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-scripts\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.559973 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfj6h\" (UniqueName: \"kubernetes.io/projected/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-kube-api-access-dfj6h\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.560034 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-config-data\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.560171 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.560359 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-scripts\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.569228 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-config-data\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.581878 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-scripts\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.584059 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.613461 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.615019 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.624955 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfj6h\" (UniqueName: \"kubernetes.io/projected/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-kube-api-access-dfj6h\") pod \"nova-cell0-cell-mapping-h76wg\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " pod="openstack/nova-cell0-cell-mapping-h76wg"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.632236 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.657617 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.660390 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.668625 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.669570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.669663 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-config-data\") pod \"nova-scheduler-0\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " pod="openstack/nova-scheduler-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.669705 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrq48\" (UniqueName: \"kubernetes.io/projected/c96261c8-fbcb-4f93-8b64-606352364faf-kube-api-access-hrq48\") pod \"nova-scheduler-0\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " pod="openstack/nova-scheduler-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.669751 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " pod="openstack/nova-scheduler-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.669777 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mx8v\" (UniqueName: \"kubernetes.io/projected/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-kube-api-access-8mx8v\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.669848 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-logs\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.669887 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-config-data\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0"
Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.686010 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h76wg" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.762895 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.799180 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-logs\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.799322 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-config-data\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.799422 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.799650 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-config-data\") pod \"nova-scheduler-0\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.799730 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrq48\" (UniqueName: \"kubernetes.io/projected/c96261c8-fbcb-4f93-8b64-606352364faf-kube-api-access-hrq48\") pod \"nova-scheduler-0\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.799831 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.799889 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mx8v\" (UniqueName: \"kubernetes.io/projected/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-kube-api-access-8mx8v\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.801350 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-logs\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.822545 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-config-data\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.828576 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:05 
crc kubenswrapper[4982]: I0224 15:16:05.828694 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.832732 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.836319 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.849332 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-config-data\") pod \"nova-scheduler-0\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.850229 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.858656 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrq48\" (UniqueName: \"kubernetes.io/projected/c96261c8-fbcb-4f93-8b64-606352364faf-kube-api-access-hrq48\") pod \"nova-scheduler-0\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.874217 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.889810 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mx8v\" (UniqueName: \"kubernetes.io/projected/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-kube-api-access-8mx8v\") pod \"nova-api-0\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " pod="openstack/nova-api-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.995953 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-config-data\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.996129 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.996314 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmslc\" (UniqueName: \"kubernetes.io/projected/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-kube-api-access-nmslc\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:05 crc kubenswrapper[4982]: I0224 15:16:05.996798 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-logs\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.011627 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.041659 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-8twsb"] Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.043895 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.063174 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.094448 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-8twsb"] Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.107418 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.107529 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-logs\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.107708 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-config-data\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.107846 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.107911 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.107944 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmslc\" (UniqueName: \"kubernetes.io/projected/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-kube-api-access-nmslc\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.107989 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5z9ws\" (UniqueName: \"kubernetes.io/projected/944cf8b8-faab-4ad6-aae7-407dd871c312-kube-api-access-5z9ws\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.108031 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-config\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.108081 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.108178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.111302 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-logs\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.150679 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.160604 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.162186 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.169189 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.197907 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmslc\" (UniqueName: \"kubernetes.io/projected/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-kube-api-access-nmslc\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.197943 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-config-data\") pod \"nova-metadata-0\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.210889 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.216946 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.217254 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.217302 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28zgj\" (UniqueName: \"kubernetes.io/projected/2abbb1b6-a124-4153-8eb5-350fbb242d28-kube-api-access-28zgj\") pod \"nova-cell1-novncproxy-0\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.217372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.217422 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.217465 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z9ws\" (UniqueName: \"kubernetes.io/projected/944cf8b8-faab-4ad6-aae7-407dd871c312-kube-api-access-5z9ws\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 
15:16:06.225120 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-config\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.225203 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.225316 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.231793 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.233441 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.233988 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.237637 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.252043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-config\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.279605 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z9ws\" (UniqueName: \"kubernetes.io/projected/944cf8b8-faab-4ad6-aae7-407dd871c312-kube-api-access-5z9ws\") pod \"dnsmasq-dns-568d7fd7cf-8twsb\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.335067 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.335144 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28zgj\" (UniqueName: \"kubernetes.io/projected/2abbb1b6-a124-4153-8eb5-350fbb242d28-kube-api-access-28zgj\") pod \"nova-cell1-novncproxy-0\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.335229 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.352708 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.357998 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.390010 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28zgj\" (UniqueName: \"kubernetes.io/projected/2abbb1b6-a124-4153-8eb5-350fbb242d28-kube-api-access-28zgj\") pod \"nova-cell1-novncproxy-0\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.395796 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.430099 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.535410 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:06 crc kubenswrapper[4982]: I0224 15:16:06.686324 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h76wg"] Feb 24 15:16:06 crc kubenswrapper[4982]: W0224 15:16:06.716226 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a6b69b0_ea04_45f2_9961_b3c44fec32b6.slice/crio-6495d4c84d3c6c0ec53b3cb2d5c2c6f1954bf5f5689dc4791bd89e470efe8518 WatchSource:0}: Error finding container 6495d4c84d3c6c0ec53b3cb2d5c2c6f1954bf5f5689dc4791bd89e470efe8518: Status 404 returned error can't find the container with id 6495d4c84d3c6c0ec53b3cb2d5c2c6f1954bf5f5689dc4791bd89e470efe8518 Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.250970 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:16:07 crc kubenswrapper[4982]: W0224 15:16:07.270265 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc96261c8_fbcb_4f93_8b64_606352364faf.slice/crio-d91646cf787b6d8856a3fb7d4cdf03fc93aeec2754de7e3870bcdc9267184ee5 WatchSource:0}: Error finding container d91646cf787b6d8856a3fb7d4cdf03fc93aeec2754de7e3870bcdc9267184ee5: Status 404 returned error can't find the container with id d91646cf787b6d8856a3fb7d4cdf03fc93aeec2754de7e3870bcdc9267184ee5 Feb 24 15:16:07 crc kubenswrapper[4982]: W0224 15:16:07.272153 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1fe5cec_0fbe_4f2d_a412_5ab9637e61ef.slice/crio-1db044bd2b50f2cedac484febc3f9d3f2e63d0686bda714ce59f2cbef9eff96e WatchSource:0}: Error finding container 1db044bd2b50f2cedac484febc3f9d3f2e63d0686bda714ce59f2cbef9eff96e: Status 404 returned error can't find the container with id 1db044bd2b50f2cedac484febc3f9d3f2e63d0686bda714ce59f2cbef9eff96e Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.276231 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.428255 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wd76l"] Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.440452 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.443795 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.444054 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.447445 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef","Type":"ContainerStarted","Data":"1db044bd2b50f2cedac484febc3f9d3f2e63d0686bda714ce59f2cbef9eff96e"} Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.450531 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h76wg" event={"ID":"0a6b69b0-ea04-45f2-9961-b3c44fec32b6","Type":"ContainerStarted","Data":"0792e97b55948427a1e8ad4017199e9342c5e561f1b95377276378d9b189fb6a"} Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.450557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h76wg" event={"ID":"0a6b69b0-ea04-45f2-9961-b3c44fec32b6","Type":"ContainerStarted","Data":"6495d4c84d3c6c0ec53b3cb2d5c2c6f1954bf5f5689dc4791bd89e470efe8518"} Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.453985 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c96261c8-fbcb-4f93-8b64-606352364faf","Type":"ContainerStarted","Data":"d91646cf787b6d8856a3fb7d4cdf03fc93aeec2754de7e3870bcdc9267184ee5"} Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.462628 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wd76l"] Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.474276 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm96j\" (UniqueName: \"kubernetes.io/projected/c95d73d5-4913-4638-bfa1-fd9c7539ed88-kube-api-access-qm96j\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.474415 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.474526 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-scripts\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.474626 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-config-data\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 
15:16:07.486575 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-h76wg" podStartSLOduration=2.486557819 podStartE2EDuration="2.486557819s" podCreationTimestamp="2026-02-24 15:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:07.479484745 +0000 UTC m=+1629.098543248" watchObservedRunningTime="2026-02-24 15:16:07.486557819 +0000 UTC m=+1629.105616312" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.577077 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm96j\" (UniqueName: \"kubernetes.io/projected/c95d73d5-4913-4638-bfa1-fd9c7539ed88-kube-api-access-qm96j\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.577610 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.577717 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-scripts\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.577813 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-config-data\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.586113 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-config-data\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.587721 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-scripts\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.589202 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.616441 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm96j\" (UniqueName: \"kubernetes.io/projected/c95d73d5-4913-4638-bfa1-fd9c7539ed88-kube-api-access-qm96j\") pod 
\"nova-cell1-conductor-db-sync-wd76l\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.693924 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.768150 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-8twsb"] Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.775595 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:07 crc kubenswrapper[4982]: I0224 15:16:07.783971 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 15:16:08 crc kubenswrapper[4982]: I0224 15:16:08.480830 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3","Type":"ContainerStarted","Data":"251585d72c03e5d86b429375a68eb95691e085fb33d8db424cad6fdba2e0eac4"} Feb 24 15:16:08 crc kubenswrapper[4982]: I0224 15:16:08.486790 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2abbb1b6-a124-4153-8eb5-350fbb242d28","Type":"ContainerStarted","Data":"4b73520953c633316967107ab36044c6ba1afca47704a4f75548924e89ea66f0"} Feb 24 15:16:08 crc kubenswrapper[4982]: I0224 15:16:08.504281 4982 generic.go:334] "Generic (PLEG): container finished" podID="944cf8b8-faab-4ad6-aae7-407dd871c312" containerID="2251205940f10cb6229112e1f64b2266ba8d488ad8a8a32499643d431493c8e0" exitCode=0 Feb 24 15:16:08 crc kubenswrapper[4982]: I0224 15:16:08.504421 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" event={"ID":"944cf8b8-faab-4ad6-aae7-407dd871c312","Type":"ContainerDied","Data":"2251205940f10cb6229112e1f64b2266ba8d488ad8a8a32499643d431493c8e0"} Feb 24 15:16:08 crc kubenswrapper[4982]: I0224 15:16:08.504456 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" event={"ID":"944cf8b8-faab-4ad6-aae7-407dd871c312","Type":"ContainerStarted","Data":"cfa0046345b7449648a99e36a44b1728f813f923f7a39c204a07aee2d5a4f01f"} Feb 24 15:16:08 crc kubenswrapper[4982]: I0224 15:16:08.520931 4982 generic.go:334] "Generic (PLEG): container finished" podID="e218d80b-e585-4a84-8d6f-d68d54b5bc20" containerID="06ec03fc44f588987a1b5fed74a2340c9f6627c1fbe206cd3373c5373247737b" exitCode=0 Feb 24 15:16:08 crc kubenswrapper[4982]: I0224 15:16:08.521630 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532436-hqpch" event={"ID":"e218d80b-e585-4a84-8d6f-d68d54b5bc20","Type":"ContainerDied","Data":"06ec03fc44f588987a1b5fed74a2340c9f6627c1fbe206cd3373c5373247737b"} Feb 24 15:16:08 crc kubenswrapper[4982]: I0224 15:16:08.605835 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wd76l"] Feb 24 15:16:09 crc kubenswrapper[4982]: I0224 15:16:09.551331 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" event={"ID":"944cf8b8-faab-4ad6-aae7-407dd871c312","Type":"ContainerStarted","Data":"6222cd0265a7ba041bb2a07a2358f086425b3e602fd86b46368de0fdb860bc92"} Feb 24 15:16:09 crc kubenswrapper[4982]: I0224 15:16:09.551711 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:09 
crc kubenswrapper[4982]: I0224 15:16:09.564571 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wd76l" event={"ID":"c95d73d5-4913-4638-bfa1-fd9c7539ed88","Type":"ContainerStarted","Data":"8c5182fdc9f626ddde9dd9cd0a446a8c1ddb70a8d9a38a9e21ae823f7c7d02fd"} Feb 24 15:16:09 crc kubenswrapper[4982]: I0224 15:16:09.564632 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wd76l" event={"ID":"c95d73d5-4913-4638-bfa1-fd9c7539ed88","Type":"ContainerStarted","Data":"aa951bacfb5841f2056c6437d6412a3bad970dc83cee3fc931c2aec13e8e133f"} Feb 24 15:16:09 crc kubenswrapper[4982]: I0224 15:16:09.576374 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" podStartSLOduration=4.576357475 podStartE2EDuration="4.576357475s" podCreationTimestamp="2026-02-24 15:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:09.57500609 +0000 UTC m=+1631.194064593" watchObservedRunningTime="2026-02-24 15:16:09.576357475 +0000 UTC m=+1631.195415968" Feb 24 15:16:09 crc kubenswrapper[4982]: I0224 15:16:09.602571 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wd76l" podStartSLOduration=2.602547918 podStartE2EDuration="2.602547918s" podCreationTimestamp="2026-02-24 15:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:09.600033723 +0000 UTC m=+1631.219092216" watchObservedRunningTime="2026-02-24 15:16:09.602547918 +0000 UTC m=+1631.221606411" Feb 24 15:16:09 crc kubenswrapper[4982]: I0224 15:16:09.854851 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:09 crc kubenswrapper[4982]: I0224 15:16:09.886694 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 15:16:11 crc kubenswrapper[4982]: I0224 15:16:11.592151 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532436-hqpch" event={"ID":"e218d80b-e585-4a84-8d6f-d68d54b5bc20","Type":"ContainerDied","Data":"4c69e341903dbb95ab01d24591688dcb9e8d3b21691a02ca89aa45ce68af3237"} Feb 24 15:16:11 crc kubenswrapper[4982]: I0224 15:16:11.592611 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c69e341903dbb95ab01d24591688dcb9e8d3b21691a02ca89aa45ce68af3237" Feb 24 15:16:11 crc kubenswrapper[4982]: I0224 15:16:11.621230 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532436-hqpch" Feb 24 15:16:11 crc kubenswrapper[4982]: I0224 15:16:11.755465 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkf7q\" (UniqueName: \"kubernetes.io/projected/e218d80b-e585-4a84-8d6f-d68d54b5bc20-kube-api-access-rkf7q\") pod \"e218d80b-e585-4a84-8d6f-d68d54b5bc20\" (UID: \"e218d80b-e585-4a84-8d6f-d68d54b5bc20\") " Feb 24 15:16:11 crc kubenswrapper[4982]: I0224 15:16:11.767881 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e218d80b-e585-4a84-8d6f-d68d54b5bc20-kube-api-access-rkf7q" (OuterVolumeSpecName: "kube-api-access-rkf7q") pod "e218d80b-e585-4a84-8d6f-d68d54b5bc20" (UID: "e218d80b-e585-4a84-8d6f-d68d54b5bc20"). 
InnerVolumeSpecName "kube-api-access-rkf7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:11 crc kubenswrapper[4982]: I0224 15:16:11.858559 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkf7q\" (UniqueName: \"kubernetes.io/projected/e218d80b-e585-4a84-8d6f-d68d54b5bc20-kube-api-access-rkf7q\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:12 crc kubenswrapper[4982]: I0224 15:16:12.604791 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532436-hqpch" Feb 24 15:16:12 crc kubenswrapper[4982]: I0224 15:16:12.690591 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:16:12 crc kubenswrapper[4982]: I0224 15:16:12.693934 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532430-mxqwb"] Feb 24 15:16:12 crc kubenswrapper[4982]: I0224 15:16:12.716605 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532430-mxqwb"] Feb 24 15:16:12 crc kubenswrapper[4982]: I0224 15:16:12.773829 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:16:12 crc kubenswrapper[4982]: I0224 15:16:12.933117 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fjv78"] Feb 24 15:16:13 crc kubenswrapper[4982]: I0224 15:16:13.163076 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e9778f9-391c-4a7d-b680-09ed29470da1" path="/var/lib/kubelet/pods/3e9778f9-391c-4a7d-b680-09ed29470da1/volumes" Feb 24 15:16:14 crc kubenswrapper[4982]: I0224 15:16:14.627391 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fjv78" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="registry-server" containerID="cri-o://73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648" gracePeriod=2 Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.210344 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-48448"] Feb 24 15:16:15 crc kubenswrapper[4982]: E0224 15:16:15.217338 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e218d80b-e585-4a84-8d6f-d68d54b5bc20" containerName="oc" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.217374 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e218d80b-e585-4a84-8d6f-d68d54b5bc20" containerName="oc" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.217733 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e218d80b-e585-4a84-8d6f-d68d54b5bc20" containerName="oc" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.219828 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.248324 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48448"] Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.356905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-catalog-content\") pod \"redhat-marketplace-48448\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.357026 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-utilities\") pod \"redhat-marketplace-48448\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.357572 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwk66\" (UniqueName: \"kubernetes.io/projected/d75afd15-dabd-4b90-845e-288d468a0270-kube-api-access-cwk66\") pod \"redhat-marketplace-48448\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.459598 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwk66\" (UniqueName: \"kubernetes.io/projected/d75afd15-dabd-4b90-845e-288d468a0270-kube-api-access-cwk66\") pod \"redhat-marketplace-48448\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.459656 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-catalog-content\") pod \"redhat-marketplace-48448\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.459733 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-utilities\") pod \"redhat-marketplace-48448\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.460301 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-utilities\") pod \"redhat-marketplace-48448\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.460776 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-catalog-content\") pod \"redhat-marketplace-48448\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.516679 4982 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cwk66\" (UniqueName: \"kubernetes.io/projected/d75afd15-dabd-4b90-845e-288d468a0270-kube-api-access-cwk66\") pod \"redhat-marketplace-48448\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.640308 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.642767 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.644302 4982 generic.go:334] "Generic (PLEG): container finished" podID="29128286-55dc-4660-915e-21aed6bacdbe" containerID="73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648" exitCode=0 Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.644339 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjv78" event={"ID":"29128286-55dc-4660-915e-21aed6bacdbe","Type":"ContainerDied","Data":"73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648"} Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.644363 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjv78" event={"ID":"29128286-55dc-4660-915e-21aed6bacdbe","Type":"ContainerDied","Data":"3a1403cbad658c763f91e96e04ad78a71808f25955bb5750a26f704b31638fef"} Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.644380 4982 scope.go:117] "RemoveContainer" containerID="73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.738010 4982 scope.go:117] "RemoveContainer" containerID="53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.766125 4982 scope.go:117] "RemoveContainer" containerID="4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.767036 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-catalog-content\") pod \"29128286-55dc-4660-915e-21aed6bacdbe\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.767074 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hds9d\" (UniqueName: \"kubernetes.io/projected/29128286-55dc-4660-915e-21aed6bacdbe-kube-api-access-hds9d\") pod \"29128286-55dc-4660-915e-21aed6bacdbe\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.767343 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-utilities\") pod \"29128286-55dc-4660-915e-21aed6bacdbe\" (UID: \"29128286-55dc-4660-915e-21aed6bacdbe\") " Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.768468 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-utilities" (OuterVolumeSpecName: "utilities") pod "29128286-55dc-4660-915e-21aed6bacdbe" (UID: "29128286-55dc-4660-915e-21aed6bacdbe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.770310 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.804349 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29128286-55dc-4660-915e-21aed6bacdbe-kube-api-access-hds9d" (OuterVolumeSpecName: "kube-api-access-hds9d") pod "29128286-55dc-4660-915e-21aed6bacdbe" (UID: "29128286-55dc-4660-915e-21aed6bacdbe"). InnerVolumeSpecName "kube-api-access-hds9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.816068 4982 scope.go:117] "RemoveContainer" containerID="73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648" Feb 24 15:16:15 crc kubenswrapper[4982]: E0224 15:16:15.816797 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648\": container with ID starting with 73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648 not found: ID does not exist" containerID="73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.816841 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648"} err="failed to get container status \"73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648\": rpc error: code = NotFound desc = could not find container \"73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648\": container with ID starting with 73c23f2ae85c35c3fb5194d6e44b9f8c1ddb5ba20eb88261a515cadbccd50648 not found: ID does not exist" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.816864 4982 scope.go:117] "RemoveContainer" containerID="53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4" Feb 24 15:16:15 crc kubenswrapper[4982]: E0224 15:16:15.817229 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4\": container with ID starting with 53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4 not found: ID does not exist" containerID="53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.817247 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4"} err="failed to get container status \"53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4\": rpc error: code = NotFound desc = could not find container \"53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4\": container with ID starting with 53e475e8422f820d36a2155c892e71d02b1db51e6ab89210312c3afcc9f7a1f4 not found: ID does not exist" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.817261 4982 scope.go:117] "RemoveContainer" containerID="4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793" Feb 24 15:16:15 crc kubenswrapper[4982]: E0224 15:16:15.817487 4982 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793\": container with ID starting with 4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793 not found: ID does not exist" containerID="4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.817519 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793"} err="failed to get container status \"4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793\": rpc error: code = NotFound desc = could not find container \"4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793\": container with ID starting with 4c08dc939f54bc5cf6dbeb4b2d44e44f4da61d37b45f8720e768b499c8c05793 not found: ID does not exist" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.836389 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29128286-55dc-4660-915e-21aed6bacdbe" (UID: "29128286-55dc-4660-915e-21aed6bacdbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.874523 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29128286-55dc-4660-915e-21aed6bacdbe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:15 crc kubenswrapper[4982]: I0224 15:16:15.874549 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hds9d\" (UniqueName: \"kubernetes.io/projected/29128286-55dc-4660-915e-21aed6bacdbe-kube-api-access-hds9d\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.234719 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48448"] Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.433033 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.521785 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sj6rr"] Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.522020 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" podUID="6836c061-e1c3-4897-824e-175a86614fad" containerName="dnsmasq-dns" containerID="cri-o://edfd0654105e29eb0825e12168991b1cce44e29e78dac9540577844d74df8c95" gracePeriod=10 Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.667197 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fjv78" Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.677625 4982 generic.go:334] "Generic (PLEG): container finished" podID="6836c061-e1c3-4897-824e-175a86614fad" containerID="edfd0654105e29eb0825e12168991b1cce44e29e78dac9540577844d74df8c95" exitCode=0 Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.677927 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" event={"ID":"6836c061-e1c3-4897-824e-175a86614fad","Type":"ContainerDied","Data":"edfd0654105e29eb0825e12168991b1cce44e29e78dac9540577844d74df8c95"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.679833 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zb79k" event={"ID":"e3f1556f-4320-4c99-9296-8526ded51204","Type":"ContainerStarted","Data":"13af9be5ac39e1a78805e550bdd61153f48819b38ad63336178d6bc9d30683ce"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.699733 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef","Type":"ContainerStarted","Data":"436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.699781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef","Type":"ContainerStarted","Data":"a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.709042 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2abbb1b6-a124-4153-8eb5-350fbb242d28","Type":"ContainerStarted","Data":"09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.709193 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2abbb1b6-a124-4153-8eb5-350fbb242d28" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d" gracePeriod=30 Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.710874 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-zb79k" podStartSLOduration=3.066166407 podStartE2EDuration="13.710853736s" podCreationTimestamp="2026-02-24 15:16:03 +0000 UTC" firstStartedPulling="2026-02-24 15:16:04.562258127 +0000 UTC m=+1626.181316620" lastFinishedPulling="2026-02-24 15:16:15.206945456 +0000 UTC m=+1636.826003949" observedRunningTime="2026-02-24 15:16:16.707977191 +0000 UTC m=+1638.327035684" watchObservedRunningTime="2026-02-24 15:16:16.710853736 +0000 UTC m=+1638.329912229" Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.727193 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c96261c8-fbcb-4f93-8b64-606352364faf","Type":"ContainerStarted","Data":"305a87c7d6b69fc2f6a05f4bd5cdfb51137d67e8451dac9329d6b2af7766413d"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.736290 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fjv78"] Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.740775 4982 generic.go:334] "Generic (PLEG): container finished" podID="d75afd15-dabd-4b90-845e-288d468a0270" 
containerID="6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957" exitCode=0 Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.740880 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48448" event={"ID":"d75afd15-dabd-4b90-845e-288d468a0270","Type":"ContainerDied","Data":"6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.740901 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48448" event={"ID":"d75afd15-dabd-4b90-845e-288d468a0270","Type":"ContainerStarted","Data":"c24a478a58a8e87430c695ce8377e31580f330388c9846ff6b5161792618e010"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.759345 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fjv78"] Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.761098 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.015076262 podStartE2EDuration="11.761079516s" podCreationTimestamp="2026-02-24 15:16:05 +0000 UTC" firstStartedPulling="2026-02-24 15:16:07.276990526 +0000 UTC m=+1628.896049019" lastFinishedPulling="2026-02-24 15:16:15.02299376 +0000 UTC m=+1636.642052273" observedRunningTime="2026-02-24 15:16:16.754040282 +0000 UTC m=+1638.373098785" watchObservedRunningTime="2026-02-24 15:16:16.761079516 +0000 UTC m=+1638.380138009" Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.772735 4982 generic.go:334] "Generic (PLEG): container finished" podID="0a6b69b0-ea04-45f2-9961-b3c44fec32b6" containerID="0792e97b55948427a1e8ad4017199e9342c5e561f1b95377276378d9b189fb6a" exitCode=0 Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.772851 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h76wg" event={"ID":"0a6b69b0-ea04-45f2-9961-b3c44fec32b6","Type":"ContainerDied","Data":"0792e97b55948427a1e8ad4017199e9342c5e561f1b95377276378d9b189fb6a"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.785631 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3","Type":"ContainerStarted","Data":"552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.785678 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3","Type":"ContainerStarted","Data":"b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c"} Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.785798 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerName="nova-metadata-log" containerID="cri-o://b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c" gracePeriod=30 Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.786051 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerName="nova-metadata-metadata" containerID="cri-o://552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987" gracePeriod=30 Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.810803 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=4.064778957 podStartE2EDuration="11.810786642s" podCreationTimestamp="2026-02-24 15:16:05 +0000 UTC" firstStartedPulling="2026-02-24 15:16:07.273864574 +0000 UTC m=+1628.892923067" lastFinishedPulling="2026-02-24 15:16:15.019872259 +0000 UTC m=+1636.638930752" observedRunningTime="2026-02-24 15:16:16.797443514 +0000 UTC m=+1638.416502007" watchObservedRunningTime="2026-02-24 15:16:16.810786642 +0000 UTC m=+1638.429845135" Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.862551 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.582493335 podStartE2EDuration="10.862530971s" podCreationTimestamp="2026-02-24 15:16:06 +0000 UTC" firstStartedPulling="2026-02-24 15:16:07.745350487 +0000 UTC m=+1629.364408970" lastFinishedPulling="2026-02-24 15:16:15.025388103 +0000 UTC m=+1636.644446606" observedRunningTime="2026-02-24 15:16:16.858622899 +0000 UTC m=+1638.477681412" watchObservedRunningTime="2026-02-24 15:16:16.862530971 +0000 UTC m=+1638.481589454" Feb 24 15:16:16 crc kubenswrapper[4982]: I0224 15:16:16.919705 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.532400989 podStartE2EDuration="11.919685641s" podCreationTimestamp="2026-02-24 15:16:05 +0000 UTC" firstStartedPulling="2026-02-24 15:16:07.721576947 +0000 UTC m=+1629.340635440" lastFinishedPulling="2026-02-24 15:16:15.108861599 +0000 UTC m=+1636.727920092" observedRunningTime="2026-02-24 15:16:16.906908688 +0000 UTC m=+1638.525967171" watchObservedRunningTime="2026-02-24 15:16:16.919685641 +0000 UTC m=+1638.538744124" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.170962 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29128286-55dc-4660-915e-21aed6bacdbe" path="/var/lib/kubelet/pods/29128286-55dc-4660-915e-21aed6bacdbe/volumes" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.419462 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.547580 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-swift-storage-0\") pod \"6836c061-e1c3-4897-824e-175a86614fad\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.547833 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-sb\") pod \"6836c061-e1c3-4897-824e-175a86614fad\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.547891 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjmv2\" (UniqueName: \"kubernetes.io/projected/6836c061-e1c3-4897-824e-175a86614fad-kube-api-access-tjmv2\") pod \"6836c061-e1c3-4897-824e-175a86614fad\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.547940 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-config\") pod \"6836c061-e1c3-4897-824e-175a86614fad\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.547982 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-nb\") pod \"6836c061-e1c3-4897-824e-175a86614fad\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.548183 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-svc\") pod \"6836c061-e1c3-4897-824e-175a86614fad\" (UID: \"6836c061-e1c3-4897-824e-175a86614fad\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.568172 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6836c061-e1c3-4897-824e-175a86614fad-kube-api-access-tjmv2" (OuterVolumeSpecName: "kube-api-access-tjmv2") pod "6836c061-e1c3-4897-824e-175a86614fad" (UID: "6836c061-e1c3-4897-824e-175a86614fad"). InnerVolumeSpecName "kube-api-access-tjmv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.627325 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-config" (OuterVolumeSpecName: "config") pod "6836c061-e1c3-4897-824e-175a86614fad" (UID: "6836c061-e1c3-4897-824e-175a86614fad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.654128 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjmv2\" (UniqueName: \"kubernetes.io/projected/6836c061-e1c3-4897-824e-175a86614fad-kube-api-access-tjmv2\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.654168 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.664991 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6836c061-e1c3-4897-824e-175a86614fad" (UID: "6836c061-e1c3-4897-824e-175a86614fad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.682901 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6836c061-e1c3-4897-824e-175a86614fad" (UID: "6836c061-e1c3-4897-824e-175a86614fad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.734487 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6836c061-e1c3-4897-824e-175a86614fad" (UID: "6836c061-e1c3-4897-824e-175a86614fad"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.746051 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.757366 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.757569 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.757661 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.775130 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6836c061-e1c3-4897-824e-175a86614fad" (UID: "6836c061-e1c3-4897-824e-175a86614fad"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.801959 4982 generic.go:334] "Generic (PLEG): container finished" podID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerID="552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987" exitCode=0 Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.801985 4982 generic.go:334] "Generic (PLEG): container finished" podID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerID="b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c" exitCode=143 Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.802016 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3","Type":"ContainerDied","Data":"552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987"} Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.802040 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3","Type":"ContainerDied","Data":"b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c"} Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.802050 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3","Type":"ContainerDied","Data":"251585d72c03e5d86b429375a68eb95691e085fb33d8db424cad6fdba2e0eac4"} Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.802066 4982 scope.go:117] "RemoveContainer" containerID="552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.802189 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.826521 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.828832 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sj6rr" event={"ID":"6836c061-e1c3-4897-824e-175a86614fad","Type":"ContainerDied","Data":"5b48d0ce4ad4fb023763cc4409fb598b9ed39b463bead7287015d0353ee4fc7c"} Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.859207 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-combined-ca-bundle\") pod \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.859312 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-config-data\") pod \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.859698 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-logs\") pod \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.859777 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmslc\" (UniqueName: \"kubernetes.io/projected/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-kube-api-access-nmslc\") pod \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\" (UID: \"e1eeb8a0-7f3a-4a29-9924-97b060b62ea3\") " Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.860635 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-logs" (OuterVolumeSpecName: "logs") pod "e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" (UID: "e1eeb8a0-7f3a-4a29-9924-97b060b62ea3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.861234 4982 scope.go:117] "RemoveContainer" containerID="b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.862111 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.862134 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6836c061-e1c3-4897-824e-175a86614fad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.873434 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-kube-api-access-nmslc" (OuterVolumeSpecName: "kube-api-access-nmslc") pod "e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" (UID: "e1eeb8a0-7f3a-4a29-9924-97b060b62ea3"). InnerVolumeSpecName "kube-api-access-nmslc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.896455 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sj6rr"] Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.901953 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" (UID: "e1eeb8a0-7f3a-4a29-9924-97b060b62ea3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.922353 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sj6rr"] Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.936789 4982 scope.go:117] "RemoveContainer" containerID="552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987" Feb 24 15:16:17 crc kubenswrapper[4982]: E0224 15:16:17.937255 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987\": container with ID starting with 552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987 not found: ID does not exist" containerID="552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.937283 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987"} err="failed to get container status \"552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987\": rpc error: code = NotFound desc = could not find container \"552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987\": container with ID starting with 552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987 not found: ID does not exist" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.937302 4982 scope.go:117] "RemoveContainer" containerID="b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c" Feb 24 15:16:17 crc kubenswrapper[4982]: E0224 15:16:17.937580 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c\": container with ID starting with b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c not found: ID does not exist" containerID="b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.937601 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c"} err="failed to get container status \"b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c\": rpc error: code = NotFound desc = could not find container \"b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c\": container with ID starting with b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c not found: ID does not exist" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.937619 4982 scope.go:117] "RemoveContainer" containerID="552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.938675 4982 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987"} err="failed to get container status \"552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987\": rpc error: code = NotFound desc = could not find container \"552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987\": container with ID starting with 552c7668040513bda3d6ba154671ec025e50b62bb8961614faa2b57010a92987 not found: ID does not exist" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.938780 4982 scope.go:117] "RemoveContainer" containerID="b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.939276 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c"} err="failed to get container status \"b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c\": rpc error: code = NotFound desc = could not find container \"b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c\": container with ID starting with b24b5fc8081fc9ab776588fd59e0467a2fab16c2ba88bd882d58d776f408fa9c not found: ID does not exist" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.939355 4982 scope.go:117] "RemoveContainer" containerID="edfd0654105e29eb0825e12168991b1cce44e29e78dac9540577844d74df8c95" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.959390 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-config-data" (OuterVolumeSpecName: "config-data") pod "e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" (UID: "e1eeb8a0-7f3a-4a29-9924-97b060b62ea3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.964478 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.964540 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:17 crc kubenswrapper[4982]: I0224 15:16:17.964552 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmslc\" (UniqueName: \"kubernetes.io/projected/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3-kube-api-access-nmslc\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.091653 4982 scope.go:117] "RemoveContainer" containerID="a758475c96814da78dcd8ee5cf848787db8f6859f1fe63d0e6e109b1f815996c" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.222478 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.237379 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.249208 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:18 crc kubenswrapper[4982]: E0224 15:16:18.249858 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerName="nova-metadata-log" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.249884 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerName="nova-metadata-log" Feb 24 15:16:18 crc kubenswrapper[4982]: E0224 15:16:18.249917 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="extract-utilities" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.249925 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="extract-utilities" Feb 24 15:16:18 crc kubenswrapper[4982]: E0224 15:16:18.249950 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="registry-server" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.249957 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="registry-server" Feb 24 15:16:18 crc kubenswrapper[4982]: E0224 15:16:18.249976 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6836c061-e1c3-4897-824e-175a86614fad" containerName="dnsmasq-dns" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.249983 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6836c061-e1c3-4897-824e-175a86614fad" containerName="dnsmasq-dns" Feb 24 15:16:18 crc kubenswrapper[4982]: E0224 15:16:18.249996 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="extract-content" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.250002 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="extract-content" Feb 24 15:16:18 crc kubenswrapper[4982]: E0224 15:16:18.250017 4982 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6836c061-e1c3-4897-824e-175a86614fad" containerName="init" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.250025 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6836c061-e1c3-4897-824e-175a86614fad" containerName="init" Feb 24 15:16:18 crc kubenswrapper[4982]: E0224 15:16:18.250046 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerName="nova-metadata-metadata" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.250053 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerName="nova-metadata-metadata" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.250319 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerName="nova-metadata-log" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.250336 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" containerName="nova-metadata-metadata" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.250350 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6836c061-e1c3-4897-824e-175a86614fad" containerName="dnsmasq-dns" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.250367 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="29128286-55dc-4660-915e-21aed6bacdbe" containerName="registry-server" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.252805 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.258898 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.259120 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.259374 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.385101 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-config-data\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.385195 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.385612 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e691872-ca4e-456e-ba0a-e3275aed130d-logs\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.385654 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.385698 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5cdx\" (UniqueName: \"kubernetes.io/projected/2e691872-ca4e-456e-ba0a-e3275aed130d-kube-api-access-z5cdx\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.389303 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h76wg" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.487152 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-scripts\") pod \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.487279 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfj6h\" (UniqueName: \"kubernetes.io/projected/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-kube-api-access-dfj6h\") pod \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.487416 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-config-data\") pod \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.487447 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-combined-ca-bundle\") pod \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\" (UID: \"0a6b69b0-ea04-45f2-9961-b3c44fec32b6\") " Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.487786 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-config-data\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.487837 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.488044 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e691872-ca4e-456e-ba0a-e3275aed130d-logs\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.488067 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.488094 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5cdx\" (UniqueName: \"kubernetes.io/projected/2e691872-ca4e-456e-ba0a-e3275aed130d-kube-api-access-z5cdx\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.488745 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e691872-ca4e-456e-ba0a-e3275aed130d-logs\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.492348 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-kube-api-access-dfj6h" (OuterVolumeSpecName: "kube-api-access-dfj6h") pod "0a6b69b0-ea04-45f2-9961-b3c44fec32b6" (UID: "0a6b69b0-ea04-45f2-9961-b3c44fec32b6"). InnerVolumeSpecName "kube-api-access-dfj6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.492469 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-scripts" (OuterVolumeSpecName: "scripts") pod "0a6b69b0-ea04-45f2-9961-b3c44fec32b6" (UID: "0a6b69b0-ea04-45f2-9961-b3c44fec32b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.493688 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.494776 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-config-data\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.495212 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.513431 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5cdx\" (UniqueName: \"kubernetes.io/projected/2e691872-ca4e-456e-ba0a-e3275aed130d-kube-api-access-z5cdx\") pod \"nova-metadata-0\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.523903 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-config-data" (OuterVolumeSpecName: "config-data") pod "0a6b69b0-ea04-45f2-9961-b3c44fec32b6" (UID: "0a6b69b0-ea04-45f2-9961-b3c44fec32b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.539550 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a6b69b0-ea04-45f2-9961-b3c44fec32b6" (UID: "0a6b69b0-ea04-45f2-9961-b3c44fec32b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.590069 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.590102 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.590115 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.590127 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfj6h\" (UniqueName: \"kubernetes.io/projected/0a6b69b0-ea04-45f2-9961-b3c44fec32b6-kube-api-access-dfj6h\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.685635 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.852548 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h76wg" event={"ID":"0a6b69b0-ea04-45f2-9961-b3c44fec32b6","Type":"ContainerDied","Data":"6495d4c84d3c6c0ec53b3cb2d5c2c6f1954bf5f5689dc4791bd89e470efe8518"} Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.852871 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6495d4c84d3c6c0ec53b3cb2d5c2c6f1954bf5f5689dc4791bd89e470efe8518" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.852571 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h76wg" Feb 24 15:16:18 crc kubenswrapper[4982]: I0224 15:16:18.856864 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48448" event={"ID":"d75afd15-dabd-4b90-845e-288d468a0270","Type":"ContainerStarted","Data":"75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7"} Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.073043 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.073274 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerName="nova-api-log" containerID="cri-o://a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d" gracePeriod=30 Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.073873 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerName="nova-api-api" containerID="cri-o://436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206" gracePeriod=30 Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.134560 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.165351 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6836c061-e1c3-4897-824e-175a86614fad" path="/var/lib/kubelet/pods/6836c061-e1c3-4897-824e-175a86614fad/volumes" Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.166059 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1eeb8a0-7f3a-4a29-9924-97b060b62ea3" path="/var/lib/kubelet/pods/e1eeb8a0-7f3a-4a29-9924-97b060b62ea3/volumes" Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.178809 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.179045 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c96261c8-fbcb-4f93-8b64-606352364faf" containerName="nova-scheduler-scheduler" containerID="cri-o://305a87c7d6b69fc2f6a05f4bd5cdfb51137d67e8451dac9329d6b2af7766413d" gracePeriod=30 Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.259982 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.870072 4982 generic.go:334] "Generic (PLEG): container finished" podID="d75afd15-dabd-4b90-845e-288d468a0270" containerID="75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7" exitCode=0 Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.870140 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48448" event={"ID":"d75afd15-dabd-4b90-845e-288d468a0270","Type":"ContainerDied","Data":"75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7"} Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.873718 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e691872-ca4e-456e-ba0a-e3275aed130d","Type":"ContainerStarted","Data":"9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79"} Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.873771 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"2e691872-ca4e-456e-ba0a-e3275aed130d","Type":"ContainerStarted","Data":"8716c3b4432d5df87ba4a865002fccf0a75f2f803b4b29071e81db994a07f2ca"} Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.880839 4982 generic.go:334] "Generic (PLEG): container finished" podID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerID="a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d" exitCode=143 Feb 24 15:16:19 crc kubenswrapper[4982]: I0224 15:16:19.880891 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef","Type":"ContainerDied","Data":"a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d"} Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.475486 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.573656 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-combined-ca-bundle\") pod \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.573870 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-logs\") pod \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.573981 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-config-data\") pod \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.574085 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mx8v\" (UniqueName: \"kubernetes.io/projected/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-kube-api-access-8mx8v\") pod \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\" (UID: \"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef\") " Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.574447 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-logs" (OuterVolumeSpecName: "logs") pod "e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" (UID: "e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.575226 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.580743 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-kube-api-access-8mx8v" (OuterVolumeSpecName: "kube-api-access-8mx8v") pod "e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" (UID: "e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef"). InnerVolumeSpecName "kube-api-access-8mx8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.612466 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" (UID: "e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.635237 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-config-data" (OuterVolumeSpecName: "config-data") pod "e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" (UID: "e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.677345 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.677381 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mx8v\" (UniqueName: \"kubernetes.io/projected/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-kube-api-access-8mx8v\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.677391 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.894756 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e691872-ca4e-456e-ba0a-e3275aed130d","Type":"ContainerStarted","Data":"1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282"} Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.894912 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerName="nova-metadata-log" containerID="cri-o://9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79" gracePeriod=30 Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.894975 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerName="nova-metadata-metadata" containerID="cri-o://1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282" gracePeriod=30 Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.902726 4982 generic.go:334] "Generic (PLEG): container finished" podID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerID="436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206" exitCode=0 Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.902791 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef","Type":"ContainerDied","Data":"436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206"} Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.902818 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef","Type":"ContainerDied","Data":"1db044bd2b50f2cedac484febc3f9d3f2e63d0686bda714ce59f2cbef9eff96e"} Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.902835 4982 scope.go:117] "RemoveContainer" containerID="436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.903004 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.917450 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48448" event={"ID":"d75afd15-dabd-4b90-845e-288d468a0270","Type":"ContainerStarted","Data":"fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7"} Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.923281 4982 generic.go:334] "Generic (PLEG): container finished" podID="e3f1556f-4320-4c99-9296-8526ded51204" containerID="13af9be5ac39e1a78805e550bdd61153f48819b38ad63336178d6bc9d30683ce" exitCode=0 Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.923334 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zb79k" event={"ID":"e3f1556f-4320-4c99-9296-8526ded51204","Type":"ContainerDied","Data":"13af9be5ac39e1a78805e550bdd61153f48819b38ad63336178d6bc9d30683ce"} Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.939879 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.939842815 podStartE2EDuration="2.939842815s" podCreationTimestamp="2026-02-24 15:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:20.913382785 +0000 UTC m=+1642.532441268" watchObservedRunningTime="2026-02-24 15:16:20.939842815 +0000 UTC m=+1642.558901328" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.950393 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-48448" podStartSLOduration=2.396768589 podStartE2EDuration="5.950373689s" podCreationTimestamp="2026-02-24 15:16:15 +0000 UTC" firstStartedPulling="2026-02-24 15:16:16.759041493 +0000 UTC m=+1638.378099986" lastFinishedPulling="2026-02-24 15:16:20.312646593 +0000 UTC m=+1641.931705086" observedRunningTime="2026-02-24 15:16:20.938911421 +0000 UTC m=+1642.557969914" watchObservedRunningTime="2026-02-24 15:16:20.950373689 +0000 UTC m=+1642.569432192" Feb 24 15:16:20 crc kubenswrapper[4982]: I0224 15:16:20.953489 4982 scope.go:117] "RemoveContainer" containerID="a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.009893 4982 scope.go:117] "RemoveContainer" containerID="436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206" Feb 24 15:16:21 crc kubenswrapper[4982]: E0224 15:16:21.012935 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206\": container with ID starting with 436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206 not found: ID does not exist" containerID="436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.012992 4982 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206"} err="failed to get container status \"436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206\": rpc error: code = NotFound desc = could not find container \"436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206\": container with ID starting with 436cf540ac81a8afcfe0ed2f1fd44c3c90bf430ab1f1664bfea282ea994b0206 not found: ID does not exist" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.013020 4982 scope.go:117] "RemoveContainer" containerID="a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d" Feb 24 15:16:21 crc kubenswrapper[4982]: E0224 15:16:21.013446 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d\": container with ID starting with a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d not found: ID does not exist" containerID="a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.013474 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d"} err="failed to get container status \"a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d\": rpc error: code = NotFound desc = could not find container \"a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d\": container with ID starting with a03f550c55780794470ab53b69497c89645ca2ed121165f2c53bfb68075f419d not found: ID does not exist" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.024902 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.038945 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.053426 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:21 crc kubenswrapper[4982]: E0224 15:16:21.055820 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerName="nova-api-log" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.055839 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerName="nova-api-log" Feb 24 15:16:21 crc kubenswrapper[4982]: E0224 15:16:21.055851 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerName="nova-api-api" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.055858 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerName="nova-api-api" Feb 24 15:16:21 crc kubenswrapper[4982]: E0224 15:16:21.055868 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a6b69b0-ea04-45f2-9961-b3c44fec32b6" containerName="nova-manage" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.055876 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6b69b0-ea04-45f2-9961-b3c44fec32b6" containerName="nova-manage" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.056147 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerName="nova-api-api" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 
15:16:21.056165 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a6b69b0-ea04-45f2-9961-b3c44fec32b6" containerName="nova-manage" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.056174 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" containerName="nova-api-log" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.063871 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.065069 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.066490 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.098906 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.172266 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef" path="/var/lib/kubelet/pods/e1fe5cec-0fbe-4f2d-a412-5ab9637e61ef/volumes" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.196411 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.196511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6qnq\" (UniqueName: \"kubernetes.io/projected/f258a706-22fa-4b6f-9c9b-51e60b78d70a-kube-api-access-w6qnq\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.196609 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f258a706-22fa-4b6f-9c9b-51e60b78d70a-logs\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.196833 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-config-data\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.299409 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f258a706-22fa-4b6f-9c9b-51e60b78d70a-logs\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.299803 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-config-data\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.299981 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.300118 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6qnq\" (UniqueName: \"kubernetes.io/projected/f258a706-22fa-4b6f-9c9b-51e60b78d70a-kube-api-access-w6qnq\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.300114 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f258a706-22fa-4b6f-9c9b-51e60b78d70a-logs\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.307237 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-config-data\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.317252 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.323470 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6qnq\" (UniqueName: \"kubernetes.io/projected/f258a706-22fa-4b6f-9c9b-51e60b78d70a-kube-api-access-w6qnq\") pod \"nova-api-0\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.477140 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.510334 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.536332 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.607637 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-nova-metadata-tls-certs\") pod \"2e691872-ca4e-456e-ba0a-e3275aed130d\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.607711 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5cdx\" (UniqueName: \"kubernetes.io/projected/2e691872-ca4e-456e-ba0a-e3275aed130d-kube-api-access-z5cdx\") pod \"2e691872-ca4e-456e-ba0a-e3275aed130d\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.607775 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-config-data\") pod \"2e691872-ca4e-456e-ba0a-e3275aed130d\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.607802 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e691872-ca4e-456e-ba0a-e3275aed130d-logs\") pod \"2e691872-ca4e-456e-ba0a-e3275aed130d\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.607937 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-combined-ca-bundle\") pod \"2e691872-ca4e-456e-ba0a-e3275aed130d\" (UID: \"2e691872-ca4e-456e-ba0a-e3275aed130d\") " Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.608194 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e691872-ca4e-456e-ba0a-e3275aed130d-logs" (OuterVolumeSpecName: "logs") pod "2e691872-ca4e-456e-ba0a-e3275aed130d" (UID: "2e691872-ca4e-456e-ba0a-e3275aed130d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.608673 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e691872-ca4e-456e-ba0a-e3275aed130d-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.614736 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e691872-ca4e-456e-ba0a-e3275aed130d-kube-api-access-z5cdx" (OuterVolumeSpecName: "kube-api-access-z5cdx") pod "2e691872-ca4e-456e-ba0a-e3275aed130d" (UID: "2e691872-ca4e-456e-ba0a-e3275aed130d"). InnerVolumeSpecName "kube-api-access-z5cdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.664690 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-config-data" (OuterVolumeSpecName: "config-data") pod "2e691872-ca4e-456e-ba0a-e3275aed130d" (UID: "2e691872-ca4e-456e-ba0a-e3275aed130d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.670643 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e691872-ca4e-456e-ba0a-e3275aed130d" (UID: "2e691872-ca4e-456e-ba0a-e3275aed130d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.678050 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2e691872-ca4e-456e-ba0a-e3275aed130d" (UID: "2e691872-ca4e-456e-ba0a-e3275aed130d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.710842 4982 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.711193 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5cdx\" (UniqueName: \"kubernetes.io/projected/2e691872-ca4e-456e-ba0a-e3275aed130d-kube-api-access-z5cdx\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.711286 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.711371 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e691872-ca4e-456e-ba0a-e3275aed130d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.935189 4982 generic.go:334] "Generic (PLEG): container finished" podID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerID="1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282" exitCode=0 Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.935231 4982 generic.go:334] "Generic (PLEG): container finished" podID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerID="9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79" exitCode=143 Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.935243 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.935292 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e691872-ca4e-456e-ba0a-e3275aed130d","Type":"ContainerDied","Data":"1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282"} Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.935353 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e691872-ca4e-456e-ba0a-e3275aed130d","Type":"ContainerDied","Data":"9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79"} Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.935392 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e691872-ca4e-456e-ba0a-e3275aed130d","Type":"ContainerDied","Data":"8716c3b4432d5df87ba4a865002fccf0a75f2f803b4b29071e81db994a07f2ca"} Feb 24 15:16:21 crc kubenswrapper[4982]: I0224 15:16:21.935414 4982 scope.go:117] "RemoveContainer" containerID="1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.027711 4982 scope.go:117] "RemoveContainer" containerID="9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.071769 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.088742 4982 scope.go:117] "RemoveContainer" containerID="1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282" Feb 24 15:16:22 crc kubenswrapper[4982]: E0224 15:16:22.089294 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282\": container with ID starting with 1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282 not found: ID does not exist" containerID="1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.089323 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282"} err="failed to get container status \"1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282\": rpc error: code = NotFound desc = could not find container \"1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282\": container with ID starting with 1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282 not found: ID does not exist" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.089347 4982 scope.go:117] "RemoveContainer" containerID="9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79" Feb 24 15:16:22 crc kubenswrapper[4982]: E0224 15:16:22.089610 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79\": container with ID starting with 9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79 not found: ID does not exist" containerID="9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.089647 4982 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79"} err="failed to get container status \"9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79\": rpc error: code = NotFound desc = could not find container \"9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79\": container with ID starting with 9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79 not found: ID does not exist" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.089663 4982 scope.go:117] "RemoveContainer" containerID="1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.089980 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282"} err="failed to get container status \"1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282\": rpc error: code = NotFound desc = could not find container \"1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282\": container with ID starting with 1eed50cece5eca85ea7e97d4194d1f5db44fee755d8153ed8de5f409f9bbf282 not found: ID does not exist" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.089999 4982 scope.go:117] "RemoveContainer" containerID="9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.090257 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79"} err="failed to get container status \"9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79\": rpc error: code = NotFound desc = could not find container \"9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79\": container with ID starting with 9e6f79d195737b98a2e0dbac48fe083cd55194da11b216a3d954d77dbf692e79 not found: ID does not exist" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.092452 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.108709 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.120726 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:22 crc kubenswrapper[4982]: E0224 15:16:22.121294 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerName="nova-metadata-metadata" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.121318 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerName="nova-metadata-metadata" Feb 24 15:16:22 crc kubenswrapper[4982]: E0224 15:16:22.121368 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerName="nova-metadata-log" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.121377 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerName="nova-metadata-log" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.121698 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerName="nova-metadata-metadata" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.121731 4982 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2e691872-ca4e-456e-ba0a-e3275aed130d" containerName="nova-metadata-log" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.123762 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.128478 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.129367 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.133285 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.221643 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.221694 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.221763 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-config-data\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.221789 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea0a117-d934-440d-87d6-72f077c38029-logs\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.221840 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96fl\" (UniqueName: \"kubernetes.io/projected/8ea0a117-d934-440d-87d6-72f077c38029-kube-api-access-m96fl\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.323596 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.323993 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.324085 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-config-data\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.324119 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea0a117-d934-440d-87d6-72f077c38029-logs\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.324164 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96fl\" (UniqueName: \"kubernetes.io/projected/8ea0a117-d934-440d-87d6-72f077c38029-kube-api-access-m96fl\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.324729 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea0a117-d934-440d-87d6-72f077c38029-logs\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.332802 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.333998 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-config-data\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.334921 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.342372 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96fl\" (UniqueName: \"kubernetes.io/projected/8ea0a117-d934-440d-87d6-72f077c38029-kube-api-access-m96fl\") pod \"nova-metadata-0\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.440943 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zb79k" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.528573 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-config-data\") pod \"e3f1556f-4320-4c99-9296-8526ded51204\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.528728 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-scripts\") pod \"e3f1556f-4320-4c99-9296-8526ded51204\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.528908 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnr8r\" (UniqueName: \"kubernetes.io/projected/e3f1556f-4320-4c99-9296-8526ded51204-kube-api-access-wnr8r\") pod \"e3f1556f-4320-4c99-9296-8526ded51204\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.529070 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-combined-ca-bundle\") pod \"e3f1556f-4320-4c99-9296-8526ded51204\" (UID: \"e3f1556f-4320-4c99-9296-8526ded51204\") " Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.532585 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-scripts" (OuterVolumeSpecName: "scripts") pod "e3f1556f-4320-4c99-9296-8526ded51204" (UID: "e3f1556f-4320-4c99-9296-8526ded51204"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.533905 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f1556f-4320-4c99-9296-8526ded51204-kube-api-access-wnr8r" (OuterVolumeSpecName: "kube-api-access-wnr8r") pod "e3f1556f-4320-4c99-9296-8526ded51204" (UID: "e3f1556f-4320-4c99-9296-8526ded51204"). InnerVolumeSpecName "kube-api-access-wnr8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.554403 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.596035 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3f1556f-4320-4c99-9296-8526ded51204" (UID: "e3f1556f-4320-4c99-9296-8526ded51204"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.596081 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-config-data" (OuterVolumeSpecName: "config-data") pod "e3f1556f-4320-4c99-9296-8526ded51204" (UID: "e3f1556f-4320-4c99-9296-8526ded51204"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.631762 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.631797 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.631806 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnr8r\" (UniqueName: \"kubernetes.io/projected/e3f1556f-4320-4c99-9296-8526ded51204-kube-api-access-wnr8r\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.631816 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f1556f-4320-4c99-9296-8526ded51204-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.965868 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zb79k" event={"ID":"e3f1556f-4320-4c99-9296-8526ded51204","Type":"ContainerDied","Data":"47b1244f18166bda5f82a6dbc6eca93925a683ac4d577f46638900f578ba8497"} Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.966209 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47b1244f18166bda5f82a6dbc6eca93925a683ac4d577f46638900f578ba8497" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.966273 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zb79k" Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.979608 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f258a706-22fa-4b6f-9c9b-51e60b78d70a","Type":"ContainerStarted","Data":"90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1"} Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.979654 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f258a706-22fa-4b6f-9c9b-51e60b78d70a","Type":"ContainerStarted","Data":"28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0"} Feb 24 15:16:22 crc kubenswrapper[4982]: I0224 15:16:22.979663 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f258a706-22fa-4b6f-9c9b-51e60b78d70a","Type":"ContainerStarted","Data":"c657341f56b09c4ae7d6a07bd3bd3748e478c4c20c4de4601dc379bda054ef1b"} Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.035696 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.035676858 podStartE2EDuration="3.035676858s" podCreationTimestamp="2026-02-24 15:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:23.017565826 +0000 UTC m=+1644.636624339" watchObservedRunningTime="2026-02-24 15:16:23.035676858 +0000 UTC m=+1644.654735351" Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.160905 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e691872-ca4e-456e-ba0a-e3275aed130d" path="/var/lib/kubelet/pods/2e691872-ca4e-456e-ba0a-e3275aed130d/volumes" Feb 24 15:16:23 crc kubenswrapper[4982]: 
I0224 15:16:23.227078 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.951878 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 24 15:16:23 crc kubenswrapper[4982]: E0224 15:16:23.952997 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f1556f-4320-4c99-9296-8526ded51204" containerName="aodh-db-sync" Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.953022 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f1556f-4320-4c99-9296-8526ded51204" containerName="aodh-db-sync" Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.953302 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f1556f-4320-4c99-9296-8526ded51204" containerName="aodh-db-sync" Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.955697 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.957240 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.957912 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-2qctw" Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.958663 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 24 15:16:23 crc kubenswrapper[4982]: I0224 15:16:23.970999 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.014551 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea0a117-d934-440d-87d6-72f077c38029","Type":"ContainerStarted","Data":"18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4"} Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.014712 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea0a117-d934-440d-87d6-72f077c38029","Type":"ContainerStarted","Data":"47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3"} Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.014803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea0a117-d934-440d-87d6-72f077c38029","Type":"ContainerStarted","Data":"ed06f96391d92a2118d91c86e5edf39172a58d2963baf6d5d35c395cad9904fd"} Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.040738 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.040714571 podStartE2EDuration="2.040714571s" podCreationTimestamp="2026-02-24 15:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:24.030475844 +0000 UTC m=+1645.649534337" watchObservedRunningTime="2026-02-24 15:16:24.040714571 +0000 UTC m=+1645.659773064" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.075529 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-config-data\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.076218 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5c2\" (UniqueName: \"kubernetes.io/projected/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-kube-api-access-cd5c2\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.076365 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-scripts\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.076470 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.179171 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-config-data\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.179677 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd5c2\" (UniqueName: \"kubernetes.io/projected/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-kube-api-access-cd5c2\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.179738 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-scripts\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.179790 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.188569 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.193337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-scripts\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.195080 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-config-data\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.204295 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd5c2\" 
(UniqueName: \"kubernetes.io/projected/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-kube-api-access-cd5c2\") pod \"aodh-0\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.318128 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.663549 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 24 15:16:24 crc kubenswrapper[4982]: I0224 15:16:24.840230 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 24 15:16:25 crc kubenswrapper[4982]: I0224 15:16:25.023890 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerStarted","Data":"fd7cb0fbd50da4c280dfc2b1bcca86a5b16d298035da1be5bf8f43230d8ce0fb"} Feb 24 15:16:25 crc kubenswrapper[4982]: I0224 15:16:25.640695 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:25 crc kubenswrapper[4982]: I0224 15:16:25.641702 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:25 crc kubenswrapper[4982]: I0224 15:16:25.705816 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:26 crc kubenswrapper[4982]: I0224 15:16:26.093313 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:26 crc kubenswrapper[4982]: I0224 15:16:26.156692 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48448"] Feb 24 15:16:26 crc kubenswrapper[4982]: I0224 15:16:26.345764 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-h5bts" podUID="94c42884-373a-42f1-91f4-1949f4a8fbe8" containerName="registry-server" probeResult="failure" output=< Feb 24 15:16:26 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:16:26 crc kubenswrapper[4982]: > Feb 24 15:16:26 crc kubenswrapper[4982]: I0224 15:16:26.373383 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-h5bts" podUID="94c42884-373a-42f1-91f4-1949f4a8fbe8" containerName="registry-server" probeResult="failure" output=< Feb 24 15:16:26 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:16:26 crc kubenswrapper[4982]: > Feb 24 15:16:26 crc kubenswrapper[4982]: I0224 15:16:26.903468 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:26 crc kubenswrapper[4982]: I0224 15:16:26.904088 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="ceilometer-central-agent" containerID="cri-o://ff61aad5c6ce7cd609d8035ca17027d2690a6576ebe0a88490ad955db249061f" gracePeriod=30 Feb 24 15:16:26 crc kubenswrapper[4982]: I0224 15:16:26.904140 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="proxy-httpd" containerID="cri-o://007d9578125210c050d09ee07e5b12ac9640a3260058d0197df155c73df0d661" 
gracePeriod=30 Feb 24 15:16:26 crc kubenswrapper[4982]: I0224 15:16:26.904169 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="sg-core" containerID="cri-o://b9d20b5e02afe1ff6e63dff4b18c545b8a475da29835b5a5e3e588ee5be0cdc9" gracePeriod=30 Feb 24 15:16:26 crc kubenswrapper[4982]: I0224 15:16:26.904246 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="ceilometer-notification-agent" containerID="cri-o://8373f31df7bfed8db4569cd59accdb15bfec8d9d70c2ec1e6085e25851fc8e2b" gracePeriod=30 Feb 24 15:16:27 crc kubenswrapper[4982]: I0224 15:16:27.051245 4982 generic.go:334] "Generic (PLEG): container finished" podID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerID="007d9578125210c050d09ee07e5b12ac9640a3260058d0197df155c73df0d661" exitCode=0 Feb 24 15:16:27 crc kubenswrapper[4982]: I0224 15:16:27.051281 4982 generic.go:334] "Generic (PLEG): container finished" podID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerID="b9d20b5e02afe1ff6e63dff4b18c545b8a475da29835b5a5e3e588ee5be0cdc9" exitCode=2 Feb 24 15:16:27 crc kubenswrapper[4982]: I0224 15:16:27.051319 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerDied","Data":"007d9578125210c050d09ee07e5b12ac9640a3260058d0197df155c73df0d661"} Feb 24 15:16:27 crc kubenswrapper[4982]: I0224 15:16:27.051390 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerDied","Data":"b9d20b5e02afe1ff6e63dff4b18c545b8a475da29835b5a5e3e588ee5be0cdc9"} Feb 24 15:16:27 crc kubenswrapper[4982]: I0224 15:16:27.054030 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerStarted","Data":"8b4b4b11ec728b191fe67baf54fb0cb8e34e5e37007ad18bf768df8e3e304e0d"} Feb 24 15:16:27 crc kubenswrapper[4982]: I0224 15:16:27.460270 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 24 15:16:27 crc kubenswrapper[4982]: I0224 15:16:27.554870 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 15:16:27 crc kubenswrapper[4982]: I0224 15:16:27.554922 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.067381 4982 generic.go:334] "Generic (PLEG): container finished" podID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerID="ff61aad5c6ce7cd609d8035ca17027d2690a6576ebe0a88490ad955db249061f" exitCode=0 Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.067435 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerDied","Data":"ff61aad5c6ce7cd609d8035ca17027d2690a6576ebe0a88490ad955db249061f"} Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.069590 4982 generic.go:334] "Generic (PLEG): container finished" podID="c95d73d5-4913-4638-bfa1-fd9c7539ed88" containerID="8c5182fdc9f626ddde9dd9cd0a446a8c1ddb70a8d9a38a9e21ae823f7c7d02fd" exitCode=0 Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.069673 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-wd76l" event={"ID":"c95d73d5-4913-4638-bfa1-fd9c7539ed88","Type":"ContainerDied","Data":"8c5182fdc9f626ddde9dd9cd0a446a8c1ddb70a8d9a38a9e21ae823f7c7d02fd"} Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.069847 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-48448" podUID="d75afd15-dabd-4b90-845e-288d468a0270" containerName="registry-server" containerID="cri-o://fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7" gracePeriod=2 Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.671262 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.711813 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-utilities\") pod \"d75afd15-dabd-4b90-845e-288d468a0270\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.712647 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwk66\" (UniqueName: \"kubernetes.io/projected/d75afd15-dabd-4b90-845e-288d468a0270-kube-api-access-cwk66\") pod \"d75afd15-dabd-4b90-845e-288d468a0270\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.713746 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-catalog-content\") pod \"d75afd15-dabd-4b90-845e-288d468a0270\" (UID: \"d75afd15-dabd-4b90-845e-288d468a0270\") " Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.714374 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-utilities" (OuterVolumeSpecName: "utilities") pod "d75afd15-dabd-4b90-845e-288d468a0270" (UID: "d75afd15-dabd-4b90-845e-288d468a0270"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.716965 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.735821 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d75afd15-dabd-4b90-845e-288d468a0270-kube-api-access-cwk66" (OuterVolumeSpecName: "kube-api-access-cwk66") pod "d75afd15-dabd-4b90-845e-288d468a0270" (UID: "d75afd15-dabd-4b90-845e-288d468a0270"). InnerVolumeSpecName "kube-api-access-cwk66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.748235 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d75afd15-dabd-4b90-845e-288d468a0270" (UID: "d75afd15-dabd-4b90-845e-288d468a0270"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.819880 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwk66\" (UniqueName: \"kubernetes.io/projected/d75afd15-dabd-4b90-845e-288d468a0270-kube-api-access-cwk66\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:28 crc kubenswrapper[4982]: I0224 15:16:28.819925 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d75afd15-dabd-4b90-845e-288d468a0270-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.085814 4982 generic.go:334] "Generic (PLEG): container finished" podID="d75afd15-dabd-4b90-845e-288d468a0270" containerID="fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7" exitCode=0 Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.086075 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48448" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.089670 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48448" event={"ID":"d75afd15-dabd-4b90-845e-288d468a0270","Type":"ContainerDied","Data":"fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7"} Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.089715 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48448" event={"ID":"d75afd15-dabd-4b90-845e-288d468a0270","Type":"ContainerDied","Data":"c24a478a58a8e87430c695ce8377e31580f330388c9846ff6b5161792618e010"} Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.089754 4982 scope.go:117] "RemoveContainer" containerID="fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.126745 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48448"] Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.141704 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-48448"] Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.160466 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d75afd15-dabd-4b90-845e-288d468a0270" path="/var/lib/kubelet/pods/d75afd15-dabd-4b90-845e-288d468a0270/volumes" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.261817 4982 scope.go:117] "RemoveContainer" containerID="75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.313480 4982 scope.go:117] "RemoveContainer" containerID="6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.382863 4982 scope.go:117] "RemoveContainer" containerID="fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7" Feb 24 15:16:29 crc kubenswrapper[4982]: E0224 15:16:29.384250 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7\": container with ID starting with fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7 not found: ID does not exist" containerID="fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.384287 4982 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7"} err="failed to get container status \"fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7\": rpc error: code = NotFound desc = could not find container \"fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7\": container with ID starting with fc728d92cab61aefacd03c69d42c77682119c46427a357034c72afa6c64084f7 not found: ID does not exist" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.384309 4982 scope.go:117] "RemoveContainer" containerID="75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7" Feb 24 15:16:29 crc kubenswrapper[4982]: E0224 15:16:29.384544 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7\": container with ID starting with 75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7 not found: ID does not exist" containerID="75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.384569 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7"} err="failed to get container status \"75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7\": rpc error: code = NotFound desc = could not find container \"75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7\": container with ID starting with 75670b2adfe33a4a23f67bfaead215f901037a09a5c347eeb2d8824f2a73fac7 not found: ID does not exist" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.384584 4982 scope.go:117] "RemoveContainer" containerID="6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957" Feb 24 15:16:29 crc kubenswrapper[4982]: E0224 15:16:29.384786 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957\": container with ID starting with 6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957 not found: ID does not exist" containerID="6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.384804 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957"} err="failed to get container status \"6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957\": rpc error: code = NotFound desc = could not find container \"6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957\": container with ID starting with 6ccac0195187b2e0eda0e0e1251fde113cb0f041ac54487e7c33ceaf6a3a1957 not found: ID does not exist" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.468708 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.533647 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm96j\" (UniqueName: \"kubernetes.io/projected/c95d73d5-4913-4638-bfa1-fd9c7539ed88-kube-api-access-qm96j\") pod \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.533930 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-scripts\") pod \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.533963 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-combined-ca-bundle\") pod \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.534015 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-config-data\") pod \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\" (UID: \"c95d73d5-4913-4638-bfa1-fd9c7539ed88\") " Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.538079 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-scripts" (OuterVolumeSpecName: "scripts") pod "c95d73d5-4913-4638-bfa1-fd9c7539ed88" (UID: "c95d73d5-4913-4638-bfa1-fd9c7539ed88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.538083 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c95d73d5-4913-4638-bfa1-fd9c7539ed88-kube-api-access-qm96j" (OuterVolumeSpecName: "kube-api-access-qm96j") pod "c95d73d5-4913-4638-bfa1-fd9c7539ed88" (UID: "c95d73d5-4913-4638-bfa1-fd9c7539ed88"). InnerVolumeSpecName "kube-api-access-qm96j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.578963 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c95d73d5-4913-4638-bfa1-fd9c7539ed88" (UID: "c95d73d5-4913-4638-bfa1-fd9c7539ed88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.586851 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-config-data" (OuterVolumeSpecName: "config-data") pod "c95d73d5-4913-4638-bfa1-fd9c7539ed88" (UID: "c95d73d5-4913-4638-bfa1-fd9c7539ed88"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.636966 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm96j\" (UniqueName: \"kubernetes.io/projected/c95d73d5-4913-4638-bfa1-fd9c7539ed88-kube-api-access-qm96j\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.637006 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.637019 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:29 crc kubenswrapper[4982]: I0224 15:16:29.637029 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95d73d5-4913-4638-bfa1-fd9c7539ed88-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.099954 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerStarted","Data":"c6d509fc0b84ed176c02a5ddca1095c01e86815fa2b8db93a147d5cdf85730e5"} Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.104413 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wd76l" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.104695 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wd76l" event={"ID":"c95d73d5-4913-4638-bfa1-fd9c7539ed88","Type":"ContainerDied","Data":"aa951bacfb5841f2056c6437d6412a3bad970dc83cee3fc931c2aec13e8e133f"} Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.104866 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa951bacfb5841f2056c6437d6412a3bad970dc83cee3fc931c2aec13e8e133f" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.189423 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 24 15:16:30 crc kubenswrapper[4982]: E0224 15:16:30.190125 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c95d73d5-4913-4638-bfa1-fd9c7539ed88" containerName="nova-cell1-conductor-db-sync" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.190143 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c95d73d5-4913-4638-bfa1-fd9c7539ed88" containerName="nova-cell1-conductor-db-sync" Feb 24 15:16:30 crc kubenswrapper[4982]: E0224 15:16:30.190157 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d75afd15-dabd-4b90-845e-288d468a0270" containerName="extract-content" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.190163 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75afd15-dabd-4b90-845e-288d468a0270" containerName="extract-content" Feb 24 15:16:30 crc kubenswrapper[4982]: E0224 15:16:30.190179 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d75afd15-dabd-4b90-845e-288d468a0270" containerName="extract-utilities" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.190185 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75afd15-dabd-4b90-845e-288d468a0270" containerName="extract-utilities" Feb 24 15:16:30 crc 
kubenswrapper[4982]: E0224 15:16:30.190202 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d75afd15-dabd-4b90-845e-288d468a0270" containerName="registry-server" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.190208 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75afd15-dabd-4b90-845e-288d468a0270" containerName="registry-server" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.190433 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c95d73d5-4913-4638-bfa1-fd9c7539ed88" containerName="nova-cell1-conductor-db-sync" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.190456 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d75afd15-dabd-4b90-845e-288d468a0270" containerName="registry-server" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.191235 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.195353 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.209156 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.250383 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c35239-eb72-4ef1-b22a-be4ea3374b3c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"67c35239-eb72-4ef1-b22a-be4ea3374b3c\") " pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.250781 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdtrx\" (UniqueName: \"kubernetes.io/projected/67c35239-eb72-4ef1-b22a-be4ea3374b3c-kube-api-access-fdtrx\") pod \"nova-cell1-conductor-0\" (UID: \"67c35239-eb72-4ef1-b22a-be4ea3374b3c\") " pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.250947 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c35239-eb72-4ef1-b22a-be4ea3374b3c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"67c35239-eb72-4ef1-b22a-be4ea3374b3c\") " pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.353492 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c35239-eb72-4ef1-b22a-be4ea3374b3c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"67c35239-eb72-4ef1-b22a-be4ea3374b3c\") " pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.353665 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdtrx\" (UniqueName: \"kubernetes.io/projected/67c35239-eb72-4ef1-b22a-be4ea3374b3c-kube-api-access-fdtrx\") pod \"nova-cell1-conductor-0\" (UID: \"67c35239-eb72-4ef1-b22a-be4ea3374b3c\") " pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.353697 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c35239-eb72-4ef1-b22a-be4ea3374b3c-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"67c35239-eb72-4ef1-b22a-be4ea3374b3c\") " pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.359441 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c35239-eb72-4ef1-b22a-be4ea3374b3c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"67c35239-eb72-4ef1-b22a-be4ea3374b3c\") " pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.359720 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c35239-eb72-4ef1-b22a-be4ea3374b3c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"67c35239-eb72-4ef1-b22a-be4ea3374b3c\") " pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.376779 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdtrx\" (UniqueName: \"kubernetes.io/projected/67c35239-eb72-4ef1-b22a-be4ea3374b3c-kube-api-access-fdtrx\") pod \"nova-cell1-conductor-0\" (UID: \"67c35239-eb72-4ef1-b22a-be4ea3374b3c\") " pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: I0224 15:16:30.506227 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:30 crc kubenswrapper[4982]: W0224 15:16:30.984446 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67c35239_eb72_4ef1_b22a_be4ea3374b3c.slice/crio-9f817a251e4cd7309235a888f8b2adf95e9a6b869870d9cc0ed668421700f507 WatchSource:0}: Error finding container 9f817a251e4cd7309235a888f8b2adf95e9a6b869870d9cc0ed668421700f507: Status 404 returned error can't find the container with id 9f817a251e4cd7309235a888f8b2adf95e9a6b869870d9cc0ed668421700f507 Feb 24 15:16:31 crc kubenswrapper[4982]: I0224 15:16:31.001293 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 24 15:16:31 crc kubenswrapper[4982]: I0224 15:16:31.120610 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"67c35239-eb72-4ef1-b22a-be4ea3374b3c","Type":"ContainerStarted","Data":"9f817a251e4cd7309235a888f8b2adf95e9a6b869870d9cc0ed668421700f507"} Feb 24 15:16:31 crc kubenswrapper[4982]: I0224 15:16:31.511549 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 15:16:31 crc kubenswrapper[4982]: I0224 15:16:31.511837 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.134619 4982 generic.go:334] "Generic (PLEG): container finished" podID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerID="8373f31df7bfed8db4569cd59accdb15bfec8d9d70c2ec1e6085e25851fc8e2b" exitCode=0 Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.134674 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerDied","Data":"8373f31df7bfed8db4569cd59accdb15bfec8d9d70c2ec1e6085e25851fc8e2b"} Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.136787 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"67c35239-eb72-4ef1-b22a-be4ea3374b3c","Type":"ContainerStarted","Data":"0f98650d8b6236ac7bc41311fb48a12ca4e49859dbb9c5d3a7839c5a1fd32864"} Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.138027 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.156314 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.156295452 podStartE2EDuration="2.156295452s" podCreationTimestamp="2026-02-24 15:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:32.15124677 +0000 UTC m=+1653.770305253" watchObservedRunningTime="2026-02-24 15:16:32.156295452 +0000 UTC m=+1653.775353945" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.554865 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.555800 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.593788 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.594177 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.0:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.773327 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.846561 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-run-httpd\") pod \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.846617 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-log-httpd\") pod \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.846763 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-combined-ca-bundle\") pod \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.846799 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-sg-core-conf-yaml\") pod \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.846824 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-config-data\") pod \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.846846 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-scripts\") pod \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.846936 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tw9s\" (UniqueName: \"kubernetes.io/projected/269117fa-6bb8-4cfa-9608-a43e8fef59a9-kube-api-access-7tw9s\") pod \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\" (UID: \"269117fa-6bb8-4cfa-9608-a43e8fef59a9\") " Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.851128 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "269117fa-6bb8-4cfa-9608-a43e8fef59a9" (UID: "269117fa-6bb8-4cfa-9608-a43e8fef59a9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.851357 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "269117fa-6bb8-4cfa-9608-a43e8fef59a9" (UID: "269117fa-6bb8-4cfa-9608-a43e8fef59a9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.853679 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269117fa-6bb8-4cfa-9608-a43e8fef59a9-kube-api-access-7tw9s" (OuterVolumeSpecName: "kube-api-access-7tw9s") pod "269117fa-6bb8-4cfa-9608-a43e8fef59a9" (UID: "269117fa-6bb8-4cfa-9608-a43e8fef59a9"). InnerVolumeSpecName "kube-api-access-7tw9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.866800 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-scripts" (OuterVolumeSpecName: "scripts") pod "269117fa-6bb8-4cfa-9608-a43e8fef59a9" (UID: "269117fa-6bb8-4cfa-9608-a43e8fef59a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.945568 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "269117fa-6bb8-4cfa-9608-a43e8fef59a9" (UID: "269117fa-6bb8-4cfa-9608-a43e8fef59a9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.962668 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.962706 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tw9s\" (UniqueName: \"kubernetes.io/projected/269117fa-6bb8-4cfa-9608-a43e8fef59a9-kube-api-access-7tw9s\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.962720 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.962733 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/269117fa-6bb8-4cfa-9608-a43e8fef59a9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:32 crc kubenswrapper[4982]: I0224 15:16:32.962744 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.027077 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "269117fa-6bb8-4cfa-9608-a43e8fef59a9" (UID: "269117fa-6bb8-4cfa-9608-a43e8fef59a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.041146 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-config-data" (OuterVolumeSpecName: "config-data") pod "269117fa-6bb8-4cfa-9608-a43e8fef59a9" (UID: "269117fa-6bb8-4cfa-9608-a43e8fef59a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.068160 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.068482 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269117fa-6bb8-4cfa-9608-a43e8fef59a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.181106 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.183200 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"269117fa-6bb8-4cfa-9608-a43e8fef59a9","Type":"ContainerDied","Data":"ec8d3bdc842c7050b6ce1a79b823b403eb28a3c323faa23d863d24cc271c21ef"} Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.183262 4982 scope.go:117] "RemoveContainer" containerID="007d9578125210c050d09ee07e5b12ac9640a3260058d0197df155c73df0d661" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.200648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerStarted","Data":"812d1cd686345baf4ab46b6a6e0f1acebd9b5941d9c05682c6b07ffed7e3511d"} Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.252491 4982 scope.go:117] "RemoveContainer" containerID="b9d20b5e02afe1ff6e63dff4b18c545b8a475da29835b5a5e3e588ee5be0cdc9" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.265577 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.283938 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.305218 4982 scope.go:117] "RemoveContainer" containerID="8373f31df7bfed8db4569cd59accdb15bfec8d9d70c2ec1e6085e25851fc8e2b" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.322537 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:33 crc kubenswrapper[4982]: E0224 15:16:33.323060 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="ceilometer-notification-agent" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.323073 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="ceilometer-notification-agent" Feb 24 15:16:33 crc kubenswrapper[4982]: E0224 15:16:33.323104 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="ceilometer-central-agent" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.323110 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="ceilometer-central-agent" Feb 24 15:16:33 crc kubenswrapper[4982]: E0224 15:16:33.323131 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="proxy-httpd" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.323138 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="proxy-httpd" 
Feb 24 15:16:33 crc kubenswrapper[4982]: E0224 15:16:33.323147 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="sg-core" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.323152 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="sg-core" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.323374 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="sg-core" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.323390 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="ceilometer-central-agent" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.323406 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="proxy-httpd" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.323424 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" containerName="ceilometer-notification-agent" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.326013 4982 scope.go:117] "RemoveContainer" containerID="ff61aad5c6ce7cd609d8035ca17027d2690a6576ebe0a88490ad955db249061f" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.330598 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.333929 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.334377 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.352639 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.481232 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-config-data\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.481689 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.481717 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-run-httpd\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.481960 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc 
kubenswrapper[4982]: I0224 15:16:33.482030 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-scripts\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.482182 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-647dr\" (UniqueName: \"kubernetes.io/projected/9b5961d5-b282-40d4-899d-1075fda9f560-kube-api-access-647dr\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.482437 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-log-httpd\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.568660 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.568726 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.584261 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-scripts\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.584334 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-647dr\" (UniqueName: \"kubernetes.io/projected/9b5961d5-b282-40d4-899d-1075fda9f560-kube-api-access-647dr\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.584398 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-log-httpd\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.584454 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-config-data\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.584541 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.584560 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-run-httpd\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.584623 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.586004 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-run-httpd\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.586302 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-log-httpd\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.591262 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.591653 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.592130 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-config-data\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.592391 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-scripts\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.604186 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-647dr\" (UniqueName: \"kubernetes.io/projected/9b5961d5-b282-40d4-899d-1075fda9f560-kube-api-access-647dr\") pod \"ceilometer-0\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " pod="openstack/ceilometer-0" Feb 24 15:16:33 crc kubenswrapper[4982]: I0224 15:16:33.651374 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:34 crc kubenswrapper[4982]: I0224 15:16:34.256927 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:34 crc kubenswrapper[4982]: W0224 15:16:34.265591 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b5961d5_b282_40d4_899d_1075fda9f560.slice/crio-337d80e34713eef55d986edc4c41c78fe874a8391a5ca4319b72c18624d84271 WatchSource:0}: Error finding container 337d80e34713eef55d986edc4c41c78fe874a8391a5ca4319b72c18624d84271: Status 404 returned error can't find the container with id 337d80e34713eef55d986edc4c41c78fe874a8391a5ca4319b72c18624d84271 Feb 24 15:16:34 crc kubenswrapper[4982]: I0224 15:16:34.490005 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 15:16:34 crc kubenswrapper[4982]: I0224 15:16:34.490448 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3cbd28d0-4f27-43d5-86cb-8fbd471d4098" containerName="kube-state-metrics" containerID="cri-o://c911608bbfccdce91ead82c6e6edd612759035ffffe737f7d1b955afcbd4bdcd" gracePeriod=30 Feb 24 15:16:34 crc kubenswrapper[4982]: I0224 15:16:34.681100 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 24 15:16:34 crc kubenswrapper[4982]: I0224 15:16:34.681782 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="32995058-5676-4cbd-9df5-92cd2ed06ff7" containerName="mysqld-exporter" containerID="cri-o://4bf2b5a99a3638b6b060e393c4d2136abced9a48ae31b556aa4d5fd10a401413" gracePeriod=30 Feb 24 15:16:35 crc kubenswrapper[4982]: I0224 15:16:35.163853 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269117fa-6bb8-4cfa-9608-a43e8fef59a9" path="/var/lib/kubelet/pods/269117fa-6bb8-4cfa-9608-a43e8fef59a9/volumes" Feb 24 15:16:35 crc kubenswrapper[4982]: I0224 15:16:35.242203 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerStarted","Data":"337d80e34713eef55d986edc4c41c78fe874a8391a5ca4319b72c18624d84271"} Feb 24 15:16:35 crc kubenswrapper[4982]: I0224 15:16:35.247873 4982 generic.go:334] "Generic (PLEG): container finished" podID="3cbd28d0-4f27-43d5-86cb-8fbd471d4098" containerID="c911608bbfccdce91ead82c6e6edd612759035ffffe737f7d1b955afcbd4bdcd" exitCode=2 Feb 24 15:16:35 crc kubenswrapper[4982]: I0224 15:16:35.247944 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3cbd28d0-4f27-43d5-86cb-8fbd471d4098","Type":"ContainerDied","Data":"c911608bbfccdce91ead82c6e6edd612759035ffffe737f7d1b955afcbd4bdcd"} Feb 24 15:16:35 crc kubenswrapper[4982]: I0224 15:16:35.251143 4982 generic.go:334] "Generic (PLEG): container finished" podID="32995058-5676-4cbd-9df5-92cd2ed06ff7" containerID="4bf2b5a99a3638b6b060e393c4d2136abced9a48ae31b556aa4d5fd10a401413" exitCode=2 Feb 24 15:16:35 crc kubenswrapper[4982]: I0224 15:16:35.251176 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"32995058-5676-4cbd-9df5-92cd2ed06ff7","Type":"ContainerDied","Data":"4bf2b5a99a3638b6b060e393c4d2136abced9a48ae31b556aa4d5fd10a401413"} Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.056157 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.169547 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlcc8\" (UniqueName: \"kubernetes.io/projected/3cbd28d0-4f27-43d5-86cb-8fbd471d4098-kube-api-access-vlcc8\") pod \"3cbd28d0-4f27-43d5-86cb-8fbd471d4098\" (UID: \"3cbd28d0-4f27-43d5-86cb-8fbd471d4098\") " Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.186115 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cbd28d0-4f27-43d5-86cb-8fbd471d4098-kube-api-access-vlcc8" (OuterVolumeSpecName: "kube-api-access-vlcc8") pod "3cbd28d0-4f27-43d5-86cb-8fbd471d4098" (UID: "3cbd28d0-4f27-43d5-86cb-8fbd471d4098"). InnerVolumeSpecName "kube-api-access-vlcc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.213078 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.271021 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.271768 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-config-data\") pod \"32995058-5676-4cbd-9df5-92cd2ed06ff7\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.271955 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3cbd28d0-4f27-43d5-86cb-8fbd471d4098","Type":"ContainerDied","Data":"3c161bbc4b1d0338ac2e7cde767c2c6324d59fcdd81ad4d0509cb5f0abb34d03"} Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.271992 4982 scope.go:117] "RemoveContainer" containerID="c911608bbfccdce91ead82c6e6edd612759035ffffe737f7d1b955afcbd4bdcd" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.271961 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-combined-ca-bundle\") pod \"32995058-5676-4cbd-9df5-92cd2ed06ff7\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.272833 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wpzx\" (UniqueName: \"kubernetes.io/projected/32995058-5676-4cbd-9df5-92cd2ed06ff7-kube-api-access-8wpzx\") pod \"32995058-5676-4cbd-9df5-92cd2ed06ff7\" (UID: \"32995058-5676-4cbd-9df5-92cd2ed06ff7\") " Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.273804 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlcc8\" (UniqueName: \"kubernetes.io/projected/3cbd28d0-4f27-43d5-86cb-8fbd471d4098-kube-api-access-vlcc8\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.278743 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32995058-5676-4cbd-9df5-92cd2ed06ff7-kube-api-access-8wpzx" (OuterVolumeSpecName: "kube-api-access-8wpzx") pod "32995058-5676-4cbd-9df5-92cd2ed06ff7" (UID: "32995058-5676-4cbd-9df5-92cd2ed06ff7"). InnerVolumeSpecName "kube-api-access-8wpzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.288718 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"32995058-5676-4cbd-9df5-92cd2ed06ff7","Type":"ContainerDied","Data":"d85b99689138a456df72650d99ec67f0aa3bb650c419ae3c46bda378863d6349"} Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.288857 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.297924 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerStarted","Data":"705f5914fdf329f0ea27eb836aba84f28c53efc4b0abee84f5d306a8ff43abb7"} Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.298113 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-api" containerID="cri-o://8b4b4b11ec728b191fe67baf54fb0cb8e34e5e37007ad18bf768df8e3e304e0d" gracePeriod=30 Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.298570 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-notifier" containerID="cri-o://812d1cd686345baf4ab46b6a6e0f1acebd9b5941d9c05682c6b07ffed7e3511d" gracePeriod=30 Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.298601 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-listener" containerID="cri-o://705f5914fdf329f0ea27eb836aba84f28c53efc4b0abee84f5d306a8ff43abb7" gracePeriod=30 Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.298622 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-evaluator" containerID="cri-o://c6d509fc0b84ed176c02a5ddca1095c01e86815fa2b8db93a147d5cdf85730e5" gracePeriod=30 Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.318169 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32995058-5676-4cbd-9df5-92cd2ed06ff7" (UID: "32995058-5676-4cbd-9df5-92cd2ed06ff7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.328877 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.345668 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.356849 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.357020 4982 scope.go:117] "RemoveContainer" containerID="4bf2b5a99a3638b6b060e393c4d2136abced9a48ae31b556aa4d5fd10a401413" Feb 24 15:16:36 crc kubenswrapper[4982]: E0224 15:16:36.357598 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32995058-5676-4cbd-9df5-92cd2ed06ff7" containerName="mysqld-exporter" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.357623 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="32995058-5676-4cbd-9df5-92cd2ed06ff7" containerName="mysqld-exporter" Feb 24 15:16:36 crc kubenswrapper[4982]: E0224 15:16:36.357652 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbd28d0-4f27-43d5-86cb-8fbd471d4098" containerName="kube-state-metrics" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.357661 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbd28d0-4f27-43d5-86cb-8fbd471d4098" containerName="kube-state-metrics" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.357950 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="32995058-5676-4cbd-9df5-92cd2ed06ff7" containerName="mysqld-exporter" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.357981 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cbd28d0-4f27-43d5-86cb-8fbd471d4098" containerName="kube-state-metrics" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.358484 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.455465337 podStartE2EDuration="13.358468231s" podCreationTimestamp="2026-02-24 15:16:23 +0000 UTC" firstStartedPulling="2026-02-24 15:16:24.846288495 +0000 UTC m=+1646.465346988" lastFinishedPulling="2026-02-24 15:16:35.749291389 +0000 UTC m=+1657.368349882" observedRunningTime="2026-02-24 15:16:36.337484024 +0000 UTC m=+1657.956542537" watchObservedRunningTime="2026-02-24 15:16:36.358468231 +0000 UTC m=+1657.977526724" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.358898 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.368761 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.369018 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.382242 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wpzx\" (UniqueName: \"kubernetes.io/projected/32995058-5676-4cbd-9df5-92cd2ed06ff7-kube-api-access-8wpzx\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.382277 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.391527 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.407955 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-config-data" (OuterVolumeSpecName: "config-data") pod "32995058-5676-4cbd-9df5-92cd2ed06ff7" (UID: "32995058-5676-4cbd-9df5-92cd2ed06ff7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.484767 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ddb94506-62fc-4912-92e6-6d37a079eba1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.485308 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb94506-62fc-4912-92e6-6d37a079eba1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.485400 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb94506-62fc-4912-92e6-6d37a079eba1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.485427 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb22m\" (UniqueName: \"kubernetes.io/projected/ddb94506-62fc-4912-92e6-6d37a079eba1-kube-api-access-lb22m\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.485525 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32995058-5676-4cbd-9df5-92cd2ed06ff7-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.587950 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lb22m\" (UniqueName: \"kubernetes.io/projected/ddb94506-62fc-4912-92e6-6d37a079eba1-kube-api-access-lb22m\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.588201 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ddb94506-62fc-4912-92e6-6d37a079eba1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.588414 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb94506-62fc-4912-92e6-6d37a079eba1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.588534 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb94506-62fc-4912-92e6-6d37a079eba1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.593164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ddb94506-62fc-4912-92e6-6d37a079eba1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.593187 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb94506-62fc-4912-92e6-6d37a079eba1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.594612 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb94506-62fc-4912-92e6-6d37a079eba1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.611438 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb22m\" (UniqueName: \"kubernetes.io/projected/ddb94506-62fc-4912-92e6-6d37a079eba1-kube-api-access-lb22m\") pod \"kube-state-metrics-0\" (UID: \"ddb94506-62fc-4912-92e6-6d37a079eba1\") " pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.687482 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.852659 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.874922 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.914005 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.926253 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.930270 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.930426 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.941843 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.998917 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-config-data\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.999048 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.999081 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:36 crc kubenswrapper[4982]: I0224 15:16:36.999116 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsr8x\" (UniqueName: \"kubernetes.io/projected/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-kube-api-access-fsr8x\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.108372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-config-data\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.110307 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.110485 
4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.110837 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsr8x\" (UniqueName: \"kubernetes.io/projected/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-kube-api-access-fsr8x\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.134846 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-config-data\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.138166 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.149268 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.158279 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsr8x\" (UniqueName: \"kubernetes.io/projected/920bb33f-bbf4-4a58-bfac-ce0d9eda6001-kube-api-access-fsr8x\") pod \"mysqld-exporter-0\" (UID: \"920bb33f-bbf4-4a58-bfac-ce0d9eda6001\") " pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.167280 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32995058-5676-4cbd-9df5-92cd2ed06ff7" path="/var/lib/kubelet/pods/32995058-5676-4cbd-9df5-92cd2ed06ff7/volumes" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.168812 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cbd28d0-4f27-43d5-86cb-8fbd471d4098" path="/var/lib/kubelet/pods/3cbd28d0-4f27-43d5-86cb-8fbd471d4098/volumes" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.322973 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerStarted","Data":"ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a"} Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.323254 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerStarted","Data":"36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4"} Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.331661 4982 generic.go:334] "Generic (PLEG): container finished" podID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerID="c6d509fc0b84ed176c02a5ddca1095c01e86815fa2b8db93a147d5cdf85730e5" exitCode=0 Feb 24 15:16:37 crc 
kubenswrapper[4982]: I0224 15:16:37.331689 4982 generic.go:334] "Generic (PLEG): container finished" podID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerID="8b4b4b11ec728b191fe67baf54fb0cb8e34e5e37007ad18bf768df8e3e304e0d" exitCode=0 Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.331710 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerDied","Data":"c6d509fc0b84ed176c02a5ddca1095c01e86815fa2b8db93a147d5cdf85730e5"} Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.331733 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerDied","Data":"8b4b4b11ec728b191fe67baf54fb0cb8e34e5e37007ad18bf768df8e3e304e0d"} Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.368254 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 24 15:16:37 crc kubenswrapper[4982]: W0224 15:16:37.371028 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddb94506_62fc_4912_92e6_6d37a079eba1.slice/crio-b333bc2a1aceff3fd006e841ab64794711a86a194e5e0925014027cb395f7773 WatchSource:0}: Error finding container b333bc2a1aceff3fd006e841ab64794711a86a194e5e0925014027cb395f7773: Status 404 returned error can't find the container with id b333bc2a1aceff3fd006e841ab64794711a86a194e5e0925014027cb395f7773 Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.405434 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.738218 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:37 crc kubenswrapper[4982]: W0224 15:16:37.878003 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod920bb33f_bbf4_4a58_bfac_ce0d9eda6001.slice/crio-f21a4ec1526121dd448f76f636e9e2efbc9cb997d1bfe69e1c12ab697f4fccea WatchSource:0}: Error finding container f21a4ec1526121dd448f76f636e9e2efbc9cb997d1bfe69e1c12ab697f4fccea: Status 404 returned error can't find the container with id f21a4ec1526121dd448f76f636e9e2efbc9cb997d1bfe69e1c12ab697f4fccea Feb 24 15:16:37 crc kubenswrapper[4982]: I0224 15:16:37.891270 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 24 15:16:38 crc kubenswrapper[4982]: I0224 15:16:38.344313 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"920bb33f-bbf4-4a58-bfac-ce0d9eda6001","Type":"ContainerStarted","Data":"f21a4ec1526121dd448f76f636e9e2efbc9cb997d1bfe69e1c12ab697f4fccea"} Feb 24 15:16:38 crc kubenswrapper[4982]: I0224 15:16:38.348648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerStarted","Data":"eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30"} Feb 24 15:16:38 crc kubenswrapper[4982]: I0224 15:16:38.351578 4982 generic.go:334] "Generic (PLEG): container finished" podID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerID="812d1cd686345baf4ab46b6a6e0f1acebd9b5941d9c05682c6b07ffed7e3511d" exitCode=0 Feb 24 15:16:38 crc kubenswrapper[4982]: I0224 15:16:38.351652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerDied","Data":"812d1cd686345baf4ab46b6a6e0f1acebd9b5941d9c05682c6b07ffed7e3511d"} Feb 24 15:16:38 crc kubenswrapper[4982]: I0224 15:16:38.353849 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddb94506-62fc-4912-92e6-6d37a079eba1","Type":"ContainerStarted","Data":"474c2f6c88d950a56d847f032d2a542f31c7761673e72f3149ef5542533b804c"} Feb 24 15:16:38 crc kubenswrapper[4982]: I0224 15:16:38.353894 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ddb94506-62fc-4912-92e6-6d37a079eba1","Type":"ContainerStarted","Data":"b333bc2a1aceff3fd006e841ab64794711a86a194e5e0925014027cb395f7773"} Feb 24 15:16:38 crc kubenswrapper[4982]: I0224 15:16:38.354035 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 24 15:16:38 crc kubenswrapper[4982]: I0224 15:16:38.382122 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.969296238 podStartE2EDuration="2.382104011s" podCreationTimestamp="2026-02-24 15:16:36 +0000 UTC" firstStartedPulling="2026-02-24 15:16:37.373766622 +0000 UTC m=+1658.992825115" lastFinishedPulling="2026-02-24 15:16:37.786574395 +0000 UTC m=+1659.405632888" observedRunningTime="2026-02-24 15:16:38.368737113 +0000 UTC m=+1659.987795606" watchObservedRunningTime="2026-02-24 15:16:38.382104011 +0000 UTC m=+1660.001162504" Feb 24 15:16:39 crc kubenswrapper[4982]: I0224 15:16:39.365989 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"920bb33f-bbf4-4a58-bfac-ce0d9eda6001","Type":"ContainerStarted","Data":"3406fe620d1f65934fd12ae0444c76711ef9c121129aa21a1190ce4a73c57a61"} Feb 24 15:16:39 crc kubenswrapper[4982]: I0224 15:16:39.393169 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.898539506 podStartE2EDuration="3.393146381s" podCreationTimestamp="2026-02-24 15:16:36 +0000 UTC" firstStartedPulling="2026-02-24 15:16:37.889152119 +0000 UTC m=+1659.508210612" lastFinishedPulling="2026-02-24 15:16:38.383758994 +0000 UTC m=+1660.002817487" observedRunningTime="2026-02-24 15:16:39.391056337 +0000 UTC m=+1661.010114850" watchObservedRunningTime="2026-02-24 15:16:39.393146381 +0000 UTC m=+1661.012204874" Feb 24 15:16:40 crc kubenswrapper[4982]: I0224 15:16:40.382114 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="ceilometer-central-agent" containerID="cri-o://ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a" gracePeriod=30 Feb 24 15:16:40 crc kubenswrapper[4982]: I0224 15:16:40.382837 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerStarted","Data":"676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57"} Feb 24 15:16:40 crc kubenswrapper[4982]: I0224 15:16:40.383094 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:16:40 crc kubenswrapper[4982]: I0224 15:16:40.383473 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="proxy-httpd" 
containerID="cri-o://676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57" gracePeriod=30 Feb 24 15:16:40 crc kubenswrapper[4982]: I0224 15:16:40.383741 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="ceilometer-notification-agent" containerID="cri-o://36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4" gracePeriod=30 Feb 24 15:16:40 crc kubenswrapper[4982]: I0224 15:16:40.383789 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="sg-core" containerID="cri-o://eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30" gracePeriod=30 Feb 24 15:16:40 crc kubenswrapper[4982]: I0224 15:16:40.480866 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.948635084 podStartE2EDuration="7.480846089s" podCreationTimestamp="2026-02-24 15:16:33 +0000 UTC" firstStartedPulling="2026-02-24 15:16:34.268409969 +0000 UTC m=+1655.887468462" lastFinishedPulling="2026-02-24 15:16:39.800620984 +0000 UTC m=+1661.419679467" observedRunningTime="2026-02-24 15:16:40.456547946 +0000 UTC m=+1662.075606449" watchObservedRunningTime="2026-02-24 15:16:40.480846089 +0000 UTC m=+1662.099904592" Feb 24 15:16:40 crc kubenswrapper[4982]: I0224 15:16:40.587150 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 24 15:16:40 crc kubenswrapper[4982]: E0224 15:16:40.943691 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b5961d5_b282_40d4_899d_1075fda9f560.slice/crio-36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b5961d5_b282_40d4_899d_1075fda9f560.slice/crio-conmon-36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.397760 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b5961d5-b282-40d4-899d-1075fda9f560" containerID="676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57" exitCode=0 Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.397799 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b5961d5-b282-40d4-899d-1075fda9f560" containerID="eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30" exitCode=2 Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.397809 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b5961d5-b282-40d4-899d-1075fda9f560" containerID="36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4" exitCode=0 Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.397830 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerDied","Data":"676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57"} Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.397863 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerDied","Data":"eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30"} Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.397875 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerDied","Data":"36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4"} Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.515714 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.516932 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.517280 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.521780 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 24 15:16:41 crc kubenswrapper[4982]: I0224 15:16:41.949262 4982 scope.go:117] "RemoveContainer" containerID="ea3ae698b79944d109c5af1612d6c18d27f1bfd0637464c73fc7226a9fb62aad" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.409590 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.417536 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.575486 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.576704 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.591172 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.671285 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5bfvp"] Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.673432 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.702906 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5bfvp"] Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.822133 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-config\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.822322 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.822435 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.822603 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pkn\" (UniqueName: \"kubernetes.io/projected/545325e5-ce54-4c9c-81b0-162d73c405fe-kube-api-access-r2pkn\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.822633 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.822680 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.925356 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.925800 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2pkn\" (UniqueName: \"kubernetes.io/projected/545325e5-ce54-4c9c-81b0-162d73c405fe-kube-api-access-r2pkn\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.925822 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.925866 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.925926 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-config\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.925970 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.926922 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.926976 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-config\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.927048 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.927655 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.928264 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:42 crc kubenswrapper[4982]: I0224 15:16:42.950476 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2pkn\" (UniqueName: \"kubernetes.io/projected/545325e5-ce54-4c9c-81b0-162d73c405fe-kube-api-access-r2pkn\") 
pod \"dnsmasq-dns-f84f9ccf-5bfvp\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.007339 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.152450 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.230883 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-scripts\") pod \"9b5961d5-b282-40d4-899d-1075fda9f560\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.230967 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-sg-core-conf-yaml\") pod \"9b5961d5-b282-40d4-899d-1075fda9f560\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.231032 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-config-data\") pod \"9b5961d5-b282-40d4-899d-1075fda9f560\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.231100 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-run-httpd\") pod \"9b5961d5-b282-40d4-899d-1075fda9f560\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.231245 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-log-httpd\") pod \"9b5961d5-b282-40d4-899d-1075fda9f560\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.231319 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-combined-ca-bundle\") pod \"9b5961d5-b282-40d4-899d-1075fda9f560\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.231404 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-647dr\" (UniqueName: \"kubernetes.io/projected/9b5961d5-b282-40d4-899d-1075fda9f560-kube-api-access-647dr\") pod \"9b5961d5-b282-40d4-899d-1075fda9f560\" (UID: \"9b5961d5-b282-40d4-899d-1075fda9f560\") " Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.232077 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9b5961d5-b282-40d4-899d-1075fda9f560" (UID: "9b5961d5-b282-40d4-899d-1075fda9f560"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.232248 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.232654 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9b5961d5-b282-40d4-899d-1075fda9f560" (UID: "9b5961d5-b282-40d4-899d-1075fda9f560"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.237952 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b5961d5-b282-40d4-899d-1075fda9f560-kube-api-access-647dr" (OuterVolumeSpecName: "kube-api-access-647dr") pod "9b5961d5-b282-40d4-899d-1075fda9f560" (UID: "9b5961d5-b282-40d4-899d-1075fda9f560"). InnerVolumeSpecName "kube-api-access-647dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.260401 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-scripts" (OuterVolumeSpecName: "scripts") pod "9b5961d5-b282-40d4-899d-1075fda9f560" (UID: "9b5961d5-b282-40d4-899d-1075fda9f560"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.329111 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9b5961d5-b282-40d4-899d-1075fda9f560" (UID: "9b5961d5-b282-40d4-899d-1075fda9f560"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.334780 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-647dr\" (UniqueName: \"kubernetes.io/projected/9b5961d5-b282-40d4-899d-1075fda9f560-kube-api-access-647dr\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.334815 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.334827 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.334837 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b5961d5-b282-40d4-899d-1075fda9f560-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.360069 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b5961d5-b282-40d4-899d-1075fda9f560" (UID: "9b5961d5-b282-40d4-899d-1075fda9f560"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.434146 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b5961d5-b282-40d4-899d-1075fda9f560" containerID="ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a" exitCode=0 Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.434216 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.434263 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerDied","Data":"ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a"} Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.434289 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b5961d5-b282-40d4-899d-1075fda9f560","Type":"ContainerDied","Data":"337d80e34713eef55d986edc4c41c78fe874a8391a5ca4319b72c18624d84271"} Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.434304 4982 scope.go:117] "RemoveContainer" containerID="676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.437302 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.455882 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.507060 4982 scope.go:117] "RemoveContainer" containerID="eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.582338 4982 scope.go:117] "RemoveContainer" containerID="36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.589145 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-config-data" (OuterVolumeSpecName: "config-data") pod "9b5961d5-b282-40d4-899d-1075fda9f560" (UID: "9b5961d5-b282-40d4-899d-1075fda9f560"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.604840 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5bfvp"] Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.646809 4982 scope.go:117] "RemoveContainer" containerID="ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.670134 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b5961d5-b282-40d4-899d-1075fda9f560-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.676393 4982 scope.go:117] "RemoveContainer" containerID="676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57" Feb 24 15:16:43 crc kubenswrapper[4982]: E0224 15:16:43.677266 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57\": container with ID starting with 676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57 not found: ID does not exist" containerID="676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.677304 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57"} err="failed to get container status \"676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57\": rpc error: code = NotFound desc = could not find container \"676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57\": container with ID starting with 676073a424701b28a2e22eb0e1e1a9ee23a17c75fb653b1f6c9604434aae2c57 not found: ID does not exist" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.677329 4982 scope.go:117] "RemoveContainer" containerID="eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30" Feb 24 15:16:43 crc kubenswrapper[4982]: E0224 15:16:43.679280 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30\": container with ID starting with eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30 not found: ID does not exist" containerID="eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.679325 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30"} err="failed to get container status \"eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30\": rpc error: code = NotFound desc = could not find container \"eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30\": container with ID starting with eeef70f707bb392034ffe602c6e1026e8483f050a18ffe24efe94fe703a58f30 not found: ID does not exist" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.679349 4982 scope.go:117] "RemoveContainer" containerID="36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4" Feb 24 15:16:43 crc kubenswrapper[4982]: E0224 15:16:43.679827 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4\": container with ID starting with 36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4 not found: ID does not exist" containerID="36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.679852 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4"} err="failed to get container status \"36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4\": rpc error: code = NotFound desc = could not find container \"36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4\": container with ID starting with 36bd0ad98f692a7a3f16d77e1b6108cc3694607fc45b176953a5d71cd630ecb4 not found: ID does not exist" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.679867 4982 scope.go:117] "RemoveContainer" containerID="ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a" Feb 24 15:16:43 crc kubenswrapper[4982]: E0224 15:16:43.680153 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a\": container with ID starting with ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a not found: ID does not exist" containerID="ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.680185 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a"} err="failed to get container status \"ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a\": rpc error: code = NotFound desc = could not find container \"ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a\": container with ID starting with ec3b64bd9d0401ab31783dac4a891e153c2d3b273d9696a749dff20704743d5a not found: ID does not exist" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.790958 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.803696 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.830641 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:43 crc kubenswrapper[4982]: E0224 15:16:43.831196 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="proxy-httpd" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.831213 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="proxy-httpd" Feb 24 15:16:43 crc kubenswrapper[4982]: E0224 15:16:43.831230 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="sg-core" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.831237 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="sg-core" Feb 24 15:16:43 crc kubenswrapper[4982]: E0224 15:16:43.831257 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="ceilometer-notification-agent" Feb 24 15:16:43 
crc kubenswrapper[4982]: I0224 15:16:43.831263 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="ceilometer-notification-agent" Feb 24 15:16:43 crc kubenswrapper[4982]: E0224 15:16:43.831278 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="ceilometer-central-agent" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.831284 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="ceilometer-central-agent" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.831486 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="ceilometer-notification-agent" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.831522 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="ceilometer-central-agent" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.831540 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="proxy-httpd" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.831553 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" containerName="sg-core" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.833578 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.843950 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.850396 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.850647 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.855121 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.978197 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.978237 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkz7x\" (UniqueName: \"kubernetes.io/projected/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-kube-api-access-xkz7x\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.978613 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-log-httpd\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.978701 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.978845 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.978973 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-run-httpd\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.979106 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-scripts\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:43 crc kubenswrapper[4982]: I0224 15:16:43.979195 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-config-data\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.081389 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.081751 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkz7x\" (UniqueName: \"kubernetes.io/projected/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-kube-api-access-xkz7x\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.081959 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-log-httpd\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.082003 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.082067 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 
15:16:44.082086 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-run-httpd\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.082127 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-scripts\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.082154 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-config-data\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.083059 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-run-httpd\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.083338 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-log-httpd\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.087275 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.088124 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.088150 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.090856 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-config-data\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.092803 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-scripts\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.104256 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xkz7x\" (UniqueName: \"kubernetes.io/projected/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-kube-api-access-xkz7x\") pod \"ceilometer-0\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.167062 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.455715 4982 generic.go:334] "Generic (PLEG): container finished" podID="545325e5-ce54-4c9c-81b0-162d73c405fe" containerID="ed6394bf579785bb6c71dc944b74b58257d3c7e8882e047a25c641e3e6d0f9f3" exitCode=0 Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.457275 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" event={"ID":"545325e5-ce54-4c9c-81b0-162d73c405fe","Type":"ContainerDied","Data":"ed6394bf579785bb6c71dc944b74b58257d3c7e8882e047a25c641e3e6d0f9f3"} Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.457317 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" event={"ID":"545325e5-ce54-4c9c-81b0-162d73c405fe","Type":"ContainerStarted","Data":"383b4cd4fd559a69484d3926c858d70a0a11b46f3272c9c26557acdfd19c4de9"} Feb 24 15:16:44 crc kubenswrapper[4982]: I0224 15:16:44.666049 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:45 crc kubenswrapper[4982]: I0224 15:16:45.214849 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b5961d5-b282-40d4-899d-1075fda9f560" path="/var/lib/kubelet/pods/9b5961d5-b282-40d4-899d-1075fda9f560/volumes" Feb 24 15:16:45 crc kubenswrapper[4982]: I0224 15:16:45.470149 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerStarted","Data":"358a063b51b7687c4a5f8abefa6df0694d71a35cd88afb3f5991029fbb87e946"} Feb 24 15:16:45 crc kubenswrapper[4982]: I0224 15:16:45.473921 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" event={"ID":"545325e5-ce54-4c9c-81b0-162d73c405fe","Type":"ContainerStarted","Data":"24075c5dcf9d10a6b86e31ecf3aae503e3ca37c42404dc4f4b776407f77b55fe"} Feb 24 15:16:45 crc kubenswrapper[4982]: I0224 15:16:45.501603 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" podStartSLOduration=3.501585661 podStartE2EDuration="3.501585661s" podCreationTimestamp="2026-02-24 15:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:45.499019704 +0000 UTC m=+1667.118078197" watchObservedRunningTime="2026-02-24 15:16:45.501585661 +0000 UTC m=+1667.120644154" Feb 24 15:16:46 crc kubenswrapper[4982]: I0224 15:16:46.181010 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:46 crc kubenswrapper[4982]: I0224 15:16:46.181479 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerName="nova-api-log" containerID="cri-o://28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0" gracePeriod=30 Feb 24 15:16:46 crc kubenswrapper[4982]: I0224 15:16:46.182006 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" 
containerName="nova-api-api" containerID="cri-o://90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1" gracePeriod=30 Feb 24 15:16:46 crc kubenswrapper[4982]: I0224 15:16:46.486923 4982 generic.go:334] "Generic (PLEG): container finished" podID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerID="28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0" exitCode=143 Feb 24 15:16:46 crc kubenswrapper[4982]: I0224 15:16:46.487012 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f258a706-22fa-4b6f-9c9b-51e60b78d70a","Type":"ContainerDied","Data":"28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0"} Feb 24 15:16:46 crc kubenswrapper[4982]: I0224 15:16:46.490457 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerStarted","Data":"706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6"} Feb 24 15:16:46 crc kubenswrapper[4982]: I0224 15:16:46.490517 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerStarted","Data":"7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f"} Feb 24 15:16:46 crc kubenswrapper[4982]: I0224 15:16:46.490823 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:46 crc kubenswrapper[4982]: I0224 15:16:46.753052 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.471431 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.510257 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerStarted","Data":"b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4"} Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.519453 4982 generic.go:334] "Generic (PLEG): container finished" podID="2abbb1b6-a124-4153-8eb5-350fbb242d28" containerID="09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d" exitCode=137 Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.520681 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.521510 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2abbb1b6-a124-4153-8eb5-350fbb242d28","Type":"ContainerDied","Data":"09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d"} Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.521558 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2abbb1b6-a124-4153-8eb5-350fbb242d28","Type":"ContainerDied","Data":"4b73520953c633316967107ab36044c6ba1afca47704a4f75548924e89ea66f0"} Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.521579 4982 scope.go:117] "RemoveContainer" containerID="09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.530887 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.561374 4982 scope.go:117] "RemoveContainer" containerID="09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d" Feb 24 15:16:47 crc kubenswrapper[4982]: E0224 15:16:47.563323 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d\": container with ID starting with 09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d not found: ID does not exist" containerID="09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.563351 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d"} err="failed to get container status \"09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d\": rpc error: code = NotFound desc = could not find container \"09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d\": container with ID starting with 09eeabe19a48d2b0761ba537f48de614cfb890b25f163fc199862802a809bb4d not found: ID does not exist" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.617297 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-combined-ca-bundle\") pod \"2abbb1b6-a124-4153-8eb5-350fbb242d28\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.617349 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28zgj\" (UniqueName: \"kubernetes.io/projected/2abbb1b6-a124-4153-8eb5-350fbb242d28-kube-api-access-28zgj\") pod \"2abbb1b6-a124-4153-8eb5-350fbb242d28\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.617700 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-config-data\") pod \"2abbb1b6-a124-4153-8eb5-350fbb242d28\" (UID: \"2abbb1b6-a124-4153-8eb5-350fbb242d28\") " Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.638548 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2abbb1b6-a124-4153-8eb5-350fbb242d28-kube-api-access-28zgj" 
(OuterVolumeSpecName: "kube-api-access-28zgj") pod "2abbb1b6-a124-4153-8eb5-350fbb242d28" (UID: "2abbb1b6-a124-4153-8eb5-350fbb242d28"). InnerVolumeSpecName "kube-api-access-28zgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.673706 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-config-data" (OuterVolumeSpecName: "config-data") pod "2abbb1b6-a124-4153-8eb5-350fbb242d28" (UID: "2abbb1b6-a124-4153-8eb5-350fbb242d28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.692414 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2abbb1b6-a124-4153-8eb5-350fbb242d28" (UID: "2abbb1b6-a124-4153-8eb5-350fbb242d28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.721995 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.722030 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2abbb1b6-a124-4153-8eb5-350fbb242d28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.722042 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28zgj\" (UniqueName: \"kubernetes.io/projected/2abbb1b6-a124-4153-8eb5-350fbb242d28-kube-api-access-28zgj\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.856404 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.868729 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.883083 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 15:16:47 crc kubenswrapper[4982]: E0224 15:16:47.883726 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2abbb1b6-a124-4153-8eb5-350fbb242d28" containerName="nova-cell1-novncproxy-novncproxy" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.883752 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2abbb1b6-a124-4153-8eb5-350fbb242d28" containerName="nova-cell1-novncproxy-novncproxy" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.884032 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2abbb1b6-a124-4153-8eb5-350fbb242d28" containerName="nova-cell1-novncproxy-novncproxy" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.884892 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.887637 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.888711 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.888789 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 24 15:16:47 crc kubenswrapper[4982]: I0224 15:16:47.911362 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.027685 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.027776 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.027858 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.028068 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4f7g\" (UniqueName: \"kubernetes.io/projected/19052f4e-3c96-4cf0-82f6-be3740f6a857-kube-api-access-f4f7g\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.028260 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.130118 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4f7g\" (UniqueName: \"kubernetes.io/projected/19052f4e-3c96-4cf0-82f6-be3740f6a857-kube-api-access-f4f7g\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.130232 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 
24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.130294 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.130329 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.130399 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.135649 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.135806 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.136048 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.144796 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19052f4e-3c96-4cf0-82f6-be3740f6a857-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.149364 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4f7g\" (UniqueName: \"kubernetes.io/projected/19052f4e-3c96-4cf0-82f6-be3740f6a857-kube-api-access-f4f7g\") pod \"nova-cell1-novncproxy-0\" (UID: \"19052f4e-3c96-4cf0-82f6-be3740f6a857\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.205229 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:48 crc kubenswrapper[4982]: I0224 15:16:48.737628 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.166218 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2abbb1b6-a124-4153-8eb5-350fbb242d28" path="/var/lib/kubelet/pods/2abbb1b6-a124-4153-8eb5-350fbb242d28/volumes" Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.563432 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"19052f4e-3c96-4cf0-82f6-be3740f6a857","Type":"ContainerStarted","Data":"98978b97093618328ae2e66bd640878e20e0e4848a03d54cd08e60dea07eb024"} Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.563784 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"19052f4e-3c96-4cf0-82f6-be3740f6a857","Type":"ContainerStarted","Data":"4aaef607c64d563aa3649cf89efd86e3af2d7fda61c20bfe4e2a751efe809ed9"} Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.569636 4982 generic.go:334] "Generic (PLEG): container finished" podID="c96261c8-fbcb-4f93-8b64-606352364faf" containerID="305a87c7d6b69fc2f6a05f4bd5cdfb51137d67e8451dac9329d6b2af7766413d" exitCode=137 Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.569688 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c96261c8-fbcb-4f93-8b64-606352364faf","Type":"ContainerDied","Data":"305a87c7d6b69fc2f6a05f4bd5cdfb51137d67e8451dac9329d6b2af7766413d"} Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.603381 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.603358193 podStartE2EDuration="2.603358193s" podCreationTimestamp="2026-02-24 15:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:49.593854855 +0000 UTC m=+1671.212913348" watchObservedRunningTime="2026-02-24 15:16:49.603358193 +0000 UTC m=+1671.222416686" Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.749421 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.911995 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-config-data\") pod \"c96261c8-fbcb-4f93-8b64-606352364faf\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.912095 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-combined-ca-bundle\") pod \"c96261c8-fbcb-4f93-8b64-606352364faf\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.912214 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrq48\" (UniqueName: \"kubernetes.io/projected/c96261c8-fbcb-4f93-8b64-606352364faf-kube-api-access-hrq48\") pod \"c96261c8-fbcb-4f93-8b64-606352364faf\" (UID: \"c96261c8-fbcb-4f93-8b64-606352364faf\") " Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.920662 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96261c8-fbcb-4f93-8b64-606352364faf-kube-api-access-hrq48" (OuterVolumeSpecName: "kube-api-access-hrq48") pod "c96261c8-fbcb-4f93-8b64-606352364faf" (UID: "c96261c8-fbcb-4f93-8b64-606352364faf"). InnerVolumeSpecName "kube-api-access-hrq48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.976747 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-config-data" (OuterVolumeSpecName: "config-data") pod "c96261c8-fbcb-4f93-8b64-606352364faf" (UID: "c96261c8-fbcb-4f93-8b64-606352364faf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:49 crc kubenswrapper[4982]: I0224 15:16:49.977592 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c96261c8-fbcb-4f93-8b64-606352364faf" (UID: "c96261c8-fbcb-4f93-8b64-606352364faf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.015310 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrq48\" (UniqueName: \"kubernetes.io/projected/c96261c8-fbcb-4f93-8b64-606352364faf-kube-api-access-hrq48\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.015354 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.015366 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96261c8-fbcb-4f93-8b64-606352364faf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.257956 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.436937 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f258a706-22fa-4b6f-9c9b-51e60b78d70a-logs\") pod \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.437090 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-combined-ca-bundle\") pod \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.437124 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-config-data\") pod \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.437155 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6qnq\" (UniqueName: \"kubernetes.io/projected/f258a706-22fa-4b6f-9c9b-51e60b78d70a-kube-api-access-w6qnq\") pod \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\" (UID: \"f258a706-22fa-4b6f-9c9b-51e60b78d70a\") " Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.439990 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f258a706-22fa-4b6f-9c9b-51e60b78d70a-logs" (OuterVolumeSpecName: "logs") pod "f258a706-22fa-4b6f-9c9b-51e60b78d70a" (UID: "f258a706-22fa-4b6f-9c9b-51e60b78d70a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.460737 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f258a706-22fa-4b6f-9c9b-51e60b78d70a-kube-api-access-w6qnq" (OuterVolumeSpecName: "kube-api-access-w6qnq") pod "f258a706-22fa-4b6f-9c9b-51e60b78d70a" (UID: "f258a706-22fa-4b6f-9c9b-51e60b78d70a"). InnerVolumeSpecName "kube-api-access-w6qnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.498875 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f258a706-22fa-4b6f-9c9b-51e60b78d70a" (UID: "f258a706-22fa-4b6f-9c9b-51e60b78d70a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.531122 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-config-data" (OuterVolumeSpecName: "config-data") pod "f258a706-22fa-4b6f-9c9b-51e60b78d70a" (UID: "f258a706-22fa-4b6f-9c9b-51e60b78d70a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.540743 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.540781 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f258a706-22fa-4b6f-9c9b-51e60b78d70a-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.540794 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6qnq\" (UniqueName: \"kubernetes.io/projected/f258a706-22fa-4b6f-9c9b-51e60b78d70a-kube-api-access-w6qnq\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.540808 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f258a706-22fa-4b6f-9c9b-51e60b78d70a-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.617692 4982 generic.go:334] "Generic (PLEG): container finished" podID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerID="90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1" exitCode=0 Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.617749 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f258a706-22fa-4b6f-9c9b-51e60b78d70a","Type":"ContainerDied","Data":"90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1"} Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.617775 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f258a706-22fa-4b6f-9c9b-51e60b78d70a","Type":"ContainerDied","Data":"c657341f56b09c4ae7d6a07bd3bd3748e478c4c20c4de4601dc379bda054ef1b"} Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.617790 4982 scope.go:117] "RemoveContainer" containerID="90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.617909 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.673691 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerStarted","Data":"7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270"} Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.673893 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="ceilometer-central-agent" containerID="cri-o://706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6" gracePeriod=30 Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.673963 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.674003 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="proxy-httpd" containerID="cri-o://7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270" gracePeriod=30 Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.674039 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="sg-core" containerID="cri-o://b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4" gracePeriod=30 Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.674071 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="ceilometer-notification-agent" containerID="cri-o://7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f" gracePeriod=30 Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.694202 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.694426 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c96261c8-fbcb-4f93-8b64-606352364faf","Type":"ContainerDied","Data":"d91646cf787b6d8856a3fb7d4cdf03fc93aeec2754de7e3870bcdc9267184ee5"} Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.709904 4982 scope.go:117] "RemoveContainer" containerID="28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.711431 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.744149 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.755415 4982 scope.go:117] "RemoveContainer" containerID="90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1" Feb 24 15:16:50 crc kubenswrapper[4982]: E0224 15:16:50.757075 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1\": container with ID starting with 90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1 not found: ID does not exist" containerID="90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.757115 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1"} err="failed to get container status \"90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1\": rpc error: code = NotFound desc = could not find container \"90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1\": container with ID starting with 90ede5a0493f0267e48cf34a6bc520d35a4966106063007e6d8f8181b4fd77c1 not found: ID does not exist" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.757138 4982 scope.go:117] "RemoveContainer" containerID="28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0" Feb 24 15:16:50 crc kubenswrapper[4982]: E0224 15:16:50.757537 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0\": container with ID starting with 28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0 not found: ID does not exist" containerID="28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.757559 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0"} err="failed to get container status \"28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0\": rpc error: code = NotFound desc = could not find container \"28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0\": container with ID starting with 28f4309bc235606a14ea0ae55d834ed49ab1d50611be1432bd65d61bcdc049f0 not found: ID does not exist" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.757574 4982 scope.go:117] "RemoveContainer" containerID="305a87c7d6b69fc2f6a05f4bd5cdfb51137d67e8451dac9329d6b2af7766413d" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.765534 4982 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:50 crc kubenswrapper[4982]: E0224 15:16:50.766085 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerName="nova-api-api" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.766113 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerName="nova-api-api" Feb 24 15:16:50 crc kubenswrapper[4982]: E0224 15:16:50.766140 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerName="nova-api-log" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.766146 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerName="nova-api-log" Feb 24 15:16:50 crc kubenswrapper[4982]: E0224 15:16:50.766176 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96261c8-fbcb-4f93-8b64-606352364faf" containerName="nova-scheduler-scheduler" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.766183 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96261c8-fbcb-4f93-8b64-606352364faf" containerName="nova-scheduler-scheduler" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.766376 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerName="nova-api-log" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.766391 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96261c8-fbcb-4f93-8b64-606352364faf" containerName="nova-scheduler-scheduler" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.766402 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" containerName="nova-api-api" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.767690 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.771041 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.772000 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.789385 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.800426 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.087361964 podStartE2EDuration="7.800406933s" podCreationTimestamp="2026-02-24 15:16:43 +0000 UTC" firstStartedPulling="2026-02-24 15:16:44.671990752 +0000 UTC m=+1666.291049245" lastFinishedPulling="2026-02-24 15:16:49.385035721 +0000 UTC m=+1671.004094214" observedRunningTime="2026-02-24 15:16:50.716116324 +0000 UTC m=+1672.335174817" watchObservedRunningTime="2026-02-24 15:16:50.800406933 +0000 UTC m=+1672.419465426" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.834776 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.854199 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.854247 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.854287 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7f2184-6692-4763-941a-26043475f64b-logs\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.854324 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-config-data\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.854350 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-public-tls-certs\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.854371 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqnd\" (UniqueName: \"kubernetes.io/projected/ba7f2184-6692-4763-941a-26043475f64b-kube-api-access-xxqnd\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 
crc kubenswrapper[4982]: I0224 15:16:50.856937 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.873476 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.885399 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.888935 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.891730 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.909831 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.986819 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-config-data\") pod \"nova-scheduler-0\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.987014 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.987116 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.987283 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.987594 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7f2184-6692-4763-941a-26043475f64b-logs\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.987673 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-config-data\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.987732 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-public-tls-certs\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.987769 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xxqnd\" (UniqueName: \"kubernetes.io/projected/ba7f2184-6692-4763-941a-26043475f64b-kube-api-access-xxqnd\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.987886 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rjb\" (UniqueName: \"kubernetes.io/projected/09c9766c-cdcd-422a-968e-bab2d1e067d6-kube-api-access-24rjb\") pod \"nova-scheduler-0\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.988532 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7f2184-6692-4763-941a-26043475f64b-logs\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.991749 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.993485 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-config-data\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:50 crc kubenswrapper[4982]: I0224 15:16:50.994133 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.001328 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-public-tls-certs\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.005615 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxqnd\" (UniqueName: \"kubernetes.io/projected/ba7f2184-6692-4763-941a-26043475f64b-kube-api-access-xxqnd\") pod \"nova-api-0\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " pod="openstack/nova-api-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.094177 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-config-data\") pod \"nova-scheduler-0\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.094256 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.094431 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24rjb\" (UniqueName: \"kubernetes.io/projected/09c9766c-cdcd-422a-968e-bab2d1e067d6-kube-api-access-24rjb\") pod \"nova-scheduler-0\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.099418 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.104072 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-config-data\") pod \"nova-scheduler-0\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.110566 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.119426 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24rjb\" (UniqueName: \"kubernetes.io/projected/09c9766c-cdcd-422a-968e-bab2d1e067d6-kube-api-access-24rjb\") pod \"nova-scheduler-0\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " pod="openstack/nova-scheduler-0" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.165169 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96261c8-fbcb-4f93-8b64-606352364faf" path="/var/lib/kubelet/pods/c96261c8-fbcb-4f93-8b64-606352364faf/volumes" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.165798 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f258a706-22fa-4b6f-9c9b-51e60b78d70a" path="/var/lib/kubelet/pods/f258a706-22fa-4b6f-9c9b-51e60b78d70a/volumes" Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.206896 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 15:16:51 crc kubenswrapper[4982]: W0224 15:16:51.703019 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba7f2184_6692_4763_941a_26043475f64b.slice/crio-39fb3ce955161f2aaf5a6b907cc958b4a4627ff4de4745db12ab91cceebb5942 WatchSource:0}: Error finding container 39fb3ce955161f2aaf5a6b907cc958b4a4627ff4de4745db12ab91cceebb5942: Status 404 returned error can't find the container with id 39fb3ce955161f2aaf5a6b907cc958b4a4627ff4de4745db12ab91cceebb5942 Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.708485 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.719030 4982 generic.go:334] "Generic (PLEG): container finished" podID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerID="7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270" exitCode=0 Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.719061 4982 generic.go:334] "Generic (PLEG): container finished" podID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerID="b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4" exitCode=2 Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.719069 4982 generic.go:334] "Generic (PLEG): container finished" podID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerID="7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f" exitCode=0 Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.719107 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerDied","Data":"7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270"} Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.719131 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerDied","Data":"b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4"} Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.719140 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerDied","Data":"7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f"} Feb 24 15:16:51 crc kubenswrapper[4982]: I0224 15:16:51.888998 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:16:52 crc kubenswrapper[4982]: I0224 15:16:52.734962 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba7f2184-6692-4763-941a-26043475f64b","Type":"ContainerStarted","Data":"a947e931d55f05d781efbb296622a1da5b99fc16d2ab470d32d9e8b049a01729"} Feb 24 15:16:52 crc kubenswrapper[4982]: I0224 15:16:52.735299 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba7f2184-6692-4763-941a-26043475f64b","Type":"ContainerStarted","Data":"c4e8587421a03f825e0e005f921d1a2211fdb392807507442c2804533898b227"} Feb 24 15:16:52 crc kubenswrapper[4982]: I0224 15:16:52.735315 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba7f2184-6692-4763-941a-26043475f64b","Type":"ContainerStarted","Data":"39fb3ce955161f2aaf5a6b907cc958b4a4627ff4de4745db12ab91cceebb5942"} Feb 24 15:16:52 crc kubenswrapper[4982]: I0224 15:16:52.737584 4982 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-scheduler-0" event={"ID":"09c9766c-cdcd-422a-968e-bab2d1e067d6","Type":"ContainerStarted","Data":"8abf25f75c60bd84462d1a0fdfedb71834204e717b1e235ba4196b1d3bf6dd9d"} Feb 24 15:16:52 crc kubenswrapper[4982]: I0224 15:16:52.737626 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09c9766c-cdcd-422a-968e-bab2d1e067d6","Type":"ContainerStarted","Data":"34860bcaec00356b9f8825919efbccdb704540181a11966220913d371474a47f"} Feb 24 15:16:52 crc kubenswrapper[4982]: I0224 15:16:52.763945 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.763924516 podStartE2EDuration="2.763924516s" podCreationTimestamp="2026-02-24 15:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:52.756714928 +0000 UTC m=+1674.375773421" watchObservedRunningTime="2026-02-24 15:16:52.763924516 +0000 UTC m=+1674.382983009" Feb 24 15:16:52 crc kubenswrapper[4982]: I0224 15:16:52.787470 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.787453279 podStartE2EDuration="2.787453279s" podCreationTimestamp="2026-02-24 15:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:16:52.775671581 +0000 UTC m=+1674.394730074" watchObservedRunningTime="2026-02-24 15:16:52.787453279 +0000 UTC m=+1674.406511772" Feb 24 15:16:53 crc kubenswrapper[4982]: I0224 15:16:53.009259 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:16:53 crc kubenswrapper[4982]: I0224 15:16:53.073653 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-8twsb"] Feb 24 15:16:53 crc kubenswrapper[4982]: I0224 15:16:53.074171 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" podUID="944cf8b8-faab-4ad6-aae7-407dd871c312" containerName="dnsmasq-dns" containerID="cri-o://6222cd0265a7ba041bb2a07a2358f086425b3e602fd86b46368de0fdb860bc92" gracePeriod=10 Feb 24 15:16:53 crc kubenswrapper[4982]: I0224 15:16:53.205446 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:53 crc kubenswrapper[4982]: I0224 15:16:53.751900 4982 generic.go:334] "Generic (PLEG): container finished" podID="944cf8b8-faab-4ad6-aae7-407dd871c312" containerID="6222cd0265a7ba041bb2a07a2358f086425b3e602fd86b46368de0fdb860bc92" exitCode=0 Feb 24 15:16:53 crc kubenswrapper[4982]: I0224 15:16:53.752995 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" event={"ID":"944cf8b8-faab-4ad6-aae7-407dd871c312","Type":"ContainerDied","Data":"6222cd0265a7ba041bb2a07a2358f086425b3e602fd86b46368de0fdb860bc92"} Feb 24 15:16:53 crc kubenswrapper[4982]: I0224 15:16:53.916913 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.073828 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-svc\") pod \"944cf8b8-faab-4ad6-aae7-407dd871c312\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.074209 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z9ws\" (UniqueName: \"kubernetes.io/projected/944cf8b8-faab-4ad6-aae7-407dd871c312-kube-api-access-5z9ws\") pod \"944cf8b8-faab-4ad6-aae7-407dd871c312\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.074264 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-swift-storage-0\") pod \"944cf8b8-faab-4ad6-aae7-407dd871c312\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.074302 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-nb\") pod \"944cf8b8-faab-4ad6-aae7-407dd871c312\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.074466 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-sb\") pod \"944cf8b8-faab-4ad6-aae7-407dd871c312\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.074656 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-config\") pod \"944cf8b8-faab-4ad6-aae7-407dd871c312\" (UID: \"944cf8b8-faab-4ad6-aae7-407dd871c312\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.091807 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944cf8b8-faab-4ad6-aae7-407dd871c312-kube-api-access-5z9ws" (OuterVolumeSpecName: "kube-api-access-5z9ws") pod "944cf8b8-faab-4ad6-aae7-407dd871c312" (UID: "944cf8b8-faab-4ad6-aae7-407dd871c312"). InnerVolumeSpecName "kube-api-access-5z9ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.139851 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "944cf8b8-faab-4ad6-aae7-407dd871c312" (UID: "944cf8b8-faab-4ad6-aae7-407dd871c312"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.151801 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-config" (OuterVolumeSpecName: "config") pod "944cf8b8-faab-4ad6-aae7-407dd871c312" (UID: "944cf8b8-faab-4ad6-aae7-407dd871c312"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.167813 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "944cf8b8-faab-4ad6-aae7-407dd871c312" (UID: "944cf8b8-faab-4ad6-aae7-407dd871c312"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.178229 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.178267 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.178288 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z9ws\" (UniqueName: \"kubernetes.io/projected/944cf8b8-faab-4ad6-aae7-407dd871c312-kube-api-access-5z9ws\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.178301 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.201807 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "944cf8b8-faab-4ad6-aae7-407dd871c312" (UID: "944cf8b8-faab-4ad6-aae7-407dd871c312"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.227604 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "944cf8b8-faab-4ad6-aae7-407dd871c312" (UID: "944cf8b8-faab-4ad6-aae7-407dd871c312"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.281556 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.281598 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/944cf8b8-faab-4ad6-aae7-407dd871c312-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.624186 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.768725 4982 generic.go:334] "Generic (PLEG): container finished" podID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerID="706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6" exitCode=0 Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.768820 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.769114 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerDied","Data":"706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6"} Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.769231 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe","Type":"ContainerDied","Data":"358a063b51b7687c4a5f8abefa6df0694d71a35cd88afb3f5991029fbb87e946"} Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.769340 4982 scope.go:117] "RemoveContainer" containerID="7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.773926 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" event={"ID":"944cf8b8-faab-4ad6-aae7-407dd871c312","Type":"ContainerDied","Data":"cfa0046345b7449648a99e36a44b1728f813f923f7a39c204a07aee2d5a4f01f"} Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.773993 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-8twsb" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.795391 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkz7x\" (UniqueName: \"kubernetes.io/projected/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-kube-api-access-xkz7x\") pod \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.795542 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-combined-ca-bundle\") pod \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.795589 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-ceilometer-tls-certs\") pod \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.795648 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-sg-core-conf-yaml\") pod \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.795712 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-log-httpd\") pod \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.795756 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-config-data\") pod \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.795857 4982 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-scripts\") pod \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.795919 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-run-httpd\") pod \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\" (UID: \"c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe\") " Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.796860 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" (UID: "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.803409 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" (UID: "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.807699 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-kube-api-access-xkz7x" (OuterVolumeSpecName: "kube-api-access-xkz7x") pod "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" (UID: "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe"). InnerVolumeSpecName "kube-api-access-xkz7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.834643 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-scripts" (OuterVolumeSpecName: "scripts") pod "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" (UID: "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.838062 4982 scope.go:117] "RemoveContainer" containerID="b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.844559 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-8twsb"] Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.869950 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-8twsb"] Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.894729 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" (UID: "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.897423 4982 scope.go:117] "RemoveContainer" containerID="7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.898878 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.898897 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.898907 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.898917 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkz7x\" (UniqueName: \"kubernetes.io/projected/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-kube-api-access-xkz7x\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.898926 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.900749 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" (UID: "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.913979 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" (UID: "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.927025 4982 scope.go:117] "RemoveContainer" containerID="706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.956811 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-config-data" (OuterVolumeSpecName: "config-data") pod "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" (UID: "c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.958810 4982 scope.go:117] "RemoveContainer" containerID="7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270" Feb 24 15:16:54 crc kubenswrapper[4982]: E0224 15:16:54.959196 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270\": container with ID starting with 7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270 not found: ID does not exist" containerID="7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.959226 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270"} err="failed to get container status \"7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270\": rpc error: code = NotFound desc = could not find container \"7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270\": container with ID starting with 7419ba85f2f6d7e420f66c778ba20fff73688f5014c472f5de923f61f4b22270 not found: ID does not exist" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.959247 4982 scope.go:117] "RemoveContainer" containerID="b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4" Feb 24 15:16:54 crc kubenswrapper[4982]: E0224 15:16:54.959657 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4\": container with ID starting with b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4 not found: ID does not exist" containerID="b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.959705 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4"} err="failed to get container status \"b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4\": rpc error: code = NotFound desc = could not find container \"b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4\": container with ID starting with b3728ba0cae8d9c94d8fef6260ecf462c4e1ac8e912d82172bbdcb64385c67a4 not found: ID does not exist" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.959735 4982 scope.go:117] "RemoveContainer" containerID="7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f" Feb 24 15:16:54 crc kubenswrapper[4982]: E0224 15:16:54.959996 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f\": container with ID starting with 7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f not found: ID does not exist" containerID="7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.960019 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f"} err="failed to get container status \"7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f\": rpc error: code = NotFound desc = could not 
find container \"7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f\": container with ID starting with 7cef672f8f03444d030d5ba56ebef51f5c3fae15681854a4ff1b6a7a7a8eca9f not found: ID does not exist" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.960037 4982 scope.go:117] "RemoveContainer" containerID="706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6" Feb 24 15:16:54 crc kubenswrapper[4982]: E0224 15:16:54.960210 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6\": container with ID starting with 706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6 not found: ID does not exist" containerID="706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.960233 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6"} err="failed to get container status \"706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6\": rpc error: code = NotFound desc = could not find container \"706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6\": container with ID starting with 706e86750e174a633b2506acece2337eb49862f782629ff2055e88c683eb86b6 not found: ID does not exist" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.960271 4982 scope.go:117] "RemoveContainer" containerID="6222cd0265a7ba041bb2a07a2358f086425b3e602fd86b46368de0fdb860bc92" Feb 24 15:16:54 crc kubenswrapper[4982]: I0224 15:16:54.982403 4982 scope.go:117] "RemoveContainer" containerID="2251205940f10cb6229112e1f64b2266ba8d488ad8a8a32499643d431493c8e0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.001369 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.001397 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.001406 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.110304 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.125175 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.144267 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:55 crc kubenswrapper[4982]: E0224 15:16:55.145381 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944cf8b8-faab-4ad6-aae7-407dd871c312" containerName="init" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145402 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="944cf8b8-faab-4ad6-aae7-407dd871c312" containerName="init" Feb 24 15:16:55 crc kubenswrapper[4982]: E0224 15:16:55.145422 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="944cf8b8-faab-4ad6-aae7-407dd871c312" containerName="dnsmasq-dns" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145429 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="944cf8b8-faab-4ad6-aae7-407dd871c312" containerName="dnsmasq-dns" Feb 24 15:16:55 crc kubenswrapper[4982]: E0224 15:16:55.145452 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="ceilometer-notification-agent" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145461 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="ceilometer-notification-agent" Feb 24 15:16:55 crc kubenswrapper[4982]: E0224 15:16:55.145498 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="proxy-httpd" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145509 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="proxy-httpd" Feb 24 15:16:55 crc kubenswrapper[4982]: E0224 15:16:55.145540 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="sg-core" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145548 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="sg-core" Feb 24 15:16:55 crc kubenswrapper[4982]: E0224 15:16:55.145568 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="ceilometer-central-agent" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145576 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="ceilometer-central-agent" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145811 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="ceilometer-central-agent" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145830 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="proxy-httpd" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145838 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="944cf8b8-faab-4ad6-aae7-407dd871c312" containerName="dnsmasq-dns" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145847 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="sg-core" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.145860 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" containerName="ceilometer-notification-agent" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.150756 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.153832 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.153923 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.157448 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.164687 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944cf8b8-faab-4ad6-aae7-407dd871c312" path="/var/lib/kubelet/pods/944cf8b8-faab-4ad6-aae7-407dd871c312/volumes" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.165375 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe" path="/var/lib/kubelet/pods/c02bc5ba-4188-4ba6-a4a1-3f3f84f61bbe/volumes" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.166371 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.310232 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-run-httpd\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.310897 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.310968 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-scripts\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.311000 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.311115 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-log-httpd\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.311159 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-config-data\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.311182 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhff\" (UniqueName: \"kubernetes.io/projected/16d339cd-3a99-489f-8d51-f65bbef63ab8-kube-api-access-tzhff\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.311285 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.413292 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-run-httpd\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.413446 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.413490 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-scripts\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.413541 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.413610 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-log-httpd\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.413710 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-run-httpd\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.413966 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-log-httpd\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.413641 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-config-data\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.414046 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhff\" (UniqueName: \"kubernetes.io/projected/16d339cd-3a99-489f-8d51-f65bbef63ab8-kube-api-access-tzhff\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.414082 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.418680 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-config-data\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.419408 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.422206 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.422526 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-scripts\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.422605 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.434946 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhff\" (UniqueName: \"kubernetes.io/projected/16d339cd-3a99-489f-8d51-f65bbef63ab8-kube-api-access-tzhff\") pod \"ceilometer-0\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " pod="openstack/ceilometer-0" Feb 24 15:16:55 crc kubenswrapper[4982]: I0224 15:16:55.493098 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:16:56 crc kubenswrapper[4982]: I0224 15:16:56.022568 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:16:56 crc kubenswrapper[4982]: W0224 15:16:56.025716 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16d339cd_3a99_489f_8d51_f65bbef63ab8.slice/crio-c50033188cbd48ef4bec2f8f9c22a6ed8f2f7a4d8dc6ea44135ecb617cff6046 WatchSource:0}: Error finding container c50033188cbd48ef4bec2f8f9c22a6ed8f2f7a4d8dc6ea44135ecb617cff6046: Status 404 returned error can't find the container with id c50033188cbd48ef4bec2f8f9c22a6ed8f2f7a4d8dc6ea44135ecb617cff6046 Feb 24 15:16:56 crc kubenswrapper[4982]: I0224 15:16:56.207145 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 24 15:16:56 crc kubenswrapper[4982]: I0224 15:16:56.809165 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerStarted","Data":"c50033188cbd48ef4bec2f8f9c22a6ed8f2f7a4d8dc6ea44135ecb617cff6046"} Feb 24 15:16:57 crc kubenswrapper[4982]: I0224 15:16:57.823992 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerStarted","Data":"deeec8978ce281c35598f8f923d6f772e95e22bea0e0321162e8323072d08281"} Feb 24 15:16:57 crc kubenswrapper[4982]: I0224 15:16:57.824414 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerStarted","Data":"32f033e5e092bc578edb80cb647aab3897acc283c62cbd7fd52c35570f278ad5"} Feb 24 15:16:58 crc kubenswrapper[4982]: I0224 15:16:58.205706 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:58 crc kubenswrapper[4982]: I0224 15:16:58.239684 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:58 crc kubenswrapper[4982]: I0224 15:16:58.845400 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerStarted","Data":"e2459aabbda368f05e0b6e092503bd0bef55a379d7b38944a6f33f48a70515b5"} Feb 24 15:16:58 crc kubenswrapper[4982]: I0224 15:16:58.861242 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.042264 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-grsdc"] Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.044121 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.053742 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-grsdc"] Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.054459 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.054682 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.137392 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2zsz\" (UniqueName: \"kubernetes.io/projected/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-kube-api-access-c2zsz\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.137457 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-scripts\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.137812 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-config-data\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.137945 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.239859 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-config-data\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.241011 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.241576 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2zsz\" (UniqueName: \"kubernetes.io/projected/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-kube-api-access-c2zsz\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.241622 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-scripts\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.242424 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.243310 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.246134 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.254615 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-config-data\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.257573 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-scripts\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.258110 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2zsz\" (UniqueName: \"kubernetes.io/projected/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-kube-api-access-c2zsz\") pod \"nova-cell1-cell-mapping-grsdc\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.378645 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:16:59 crc kubenswrapper[4982]: I0224 15:16:59.922688 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-grsdc"] Feb 24 15:16:59 crc kubenswrapper[4982]: W0224 15:16:59.927433 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd91be378_9ad1_4685_b2b2_3f96f2f6fc88.slice/crio-2f98e1805f0abfba1f2969a3207457250fe6f31f9b426d8e2a75e4ec772484c6 WatchSource:0}: Error finding container 2f98e1805f0abfba1f2969a3207457250fe6f31f9b426d8e2a75e4ec772484c6: Status 404 returned error can't find the container with id 2f98e1805f0abfba1f2969a3207457250fe6f31f9b426d8e2a75e4ec772484c6 Feb 24 15:17:00 crc kubenswrapper[4982]: I0224 15:17:00.887226 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-grsdc" event={"ID":"d91be378-9ad1-4685-b2b2-3f96f2f6fc88","Type":"ContainerStarted","Data":"3249a603e0f5078d12cb824b322cbea3865cacff30dd643dc621f627d1de372c"} Feb 24 15:17:00 crc kubenswrapper[4982]: I0224 15:17:00.887807 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-grsdc" event={"ID":"d91be378-9ad1-4685-b2b2-3f96f2f6fc88","Type":"ContainerStarted","Data":"2f98e1805f0abfba1f2969a3207457250fe6f31f9b426d8e2a75e4ec772484c6"} Feb 24 15:17:00 crc kubenswrapper[4982]: I0224 15:17:00.892080 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerStarted","Data":"4714106cede0199a40fc90b43537e3bfcf5dd6e30a0aaa5a43abbda0584227fb"} Feb 24 15:17:00 crc kubenswrapper[4982]: I0224 15:17:00.892355 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:17:00 crc kubenswrapper[4982]: I0224 15:17:00.912408 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-grsdc" podStartSLOduration=1.9123860929999998 podStartE2EDuration="1.912386093s" podCreationTimestamp="2026-02-24 15:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:17:00.900526884 +0000 UTC m=+1682.519585397" watchObservedRunningTime="2026-02-24 15:17:00.912386093 +0000 UTC m=+1682.531444586" Feb 24 15:17:00 crc kubenswrapper[4982]: I0224 15:17:00.928006 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.946053434 podStartE2EDuration="5.92798582s" podCreationTimestamp="2026-02-24 15:16:55 +0000 UTC" firstStartedPulling="2026-02-24 15:16:56.028401658 +0000 UTC m=+1677.647460151" lastFinishedPulling="2026-02-24 15:17:00.010334044 +0000 UTC m=+1681.629392537" observedRunningTime="2026-02-24 15:17:00.918966104 +0000 UTC m=+1682.538024597" watchObservedRunningTime="2026-02-24 15:17:00.92798582 +0000 UTC m=+1682.547044313" Feb 24 15:17:01 crc kubenswrapper[4982]: I0224 15:17:01.111748 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 15:17:01 crc kubenswrapper[4982]: I0224 15:17:01.111785 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 15:17:01 crc kubenswrapper[4982]: I0224 15:17:01.207692 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Feb 24 15:17:01 crc kubenswrapper[4982]: I0224 15:17:01.245425 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 24 15:17:01 crc kubenswrapper[4982]: I0224 15:17:01.953565 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 24 15:17:02 crc kubenswrapper[4982]: I0224 15:17:02.125688 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.10:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 15:17:02 crc kubenswrapper[4982]: I0224 15:17:02.125952 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.10:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 15:17:05 crc kubenswrapper[4982]: I0224 15:17:05.955417 4982 generic.go:334] "Generic (PLEG): container finished" podID="d91be378-9ad1-4685-b2b2-3f96f2f6fc88" containerID="3249a603e0f5078d12cb824b322cbea3865cacff30dd643dc621f627d1de372c" exitCode=0 Feb 24 15:17:05 crc kubenswrapper[4982]: I0224 15:17:05.955550 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-grsdc" event={"ID":"d91be378-9ad1-4685-b2b2-3f96f2f6fc88","Type":"ContainerDied","Data":"3249a603e0f5078d12cb824b322cbea3865cacff30dd643dc621f627d1de372c"} Feb 24 15:17:06 crc kubenswrapper[4982]: I0224 15:17:06.974321 4982 generic.go:334] "Generic (PLEG): container finished" podID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerID="705f5914fdf329f0ea27eb836aba84f28c53efc4b0abee84f5d306a8ff43abb7" exitCode=137 Feb 24 15:17:06 crc kubenswrapper[4982]: I0224 15:17:06.974534 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerDied","Data":"705f5914fdf329f0ea27eb836aba84f28c53efc4b0abee84f5d306a8ff43abb7"} Feb 24 15:17:06 crc kubenswrapper[4982]: I0224 15:17:06.974751 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b41363a0-fbb1-45bc-8d05-1be8b4a10dac","Type":"ContainerDied","Data":"fd7cb0fbd50da4c280dfc2b1bcca86a5b16d298035da1be5bf8f43230d8ce0fb"} Feb 24 15:17:06 crc kubenswrapper[4982]: I0224 15:17:06.974765 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd7cb0fbd50da4c280dfc2b1bcca86a5b16d298035da1be5bf8f43230d8ce0fb" Feb 24 15:17:06 crc kubenswrapper[4982]: I0224 15:17:06.976198 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.070143 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd5c2\" (UniqueName: \"kubernetes.io/projected/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-kube-api-access-cd5c2\") pod \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.070274 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-config-data\") pod \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.070310 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-combined-ca-bundle\") pod \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.070711 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-scripts\") pod \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\" (UID: \"b41363a0-fbb1-45bc-8d05-1be8b4a10dac\") " Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.082710 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-scripts" (OuterVolumeSpecName: "scripts") pod "b41363a0-fbb1-45bc-8d05-1be8b4a10dac" (UID: "b41363a0-fbb1-45bc-8d05-1be8b4a10dac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.082745 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-kube-api-access-cd5c2" (OuterVolumeSpecName: "kube-api-access-cd5c2") pod "b41363a0-fbb1-45bc-8d05-1be8b4a10dac" (UID: "b41363a0-fbb1-45bc-8d05-1be8b4a10dac"). InnerVolumeSpecName "kube-api-access-cd5c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.173884 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.173917 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd5c2\" (UniqueName: \"kubernetes.io/projected/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-kube-api-access-cd5c2\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.221429 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b41363a0-fbb1-45bc-8d05-1be8b4a10dac" (UID: "b41363a0-fbb1-45bc-8d05-1be8b4a10dac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.226409 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-config-data" (OuterVolumeSpecName: "config-data") pod "b41363a0-fbb1-45bc-8d05-1be8b4a10dac" (UID: "b41363a0-fbb1-45bc-8d05-1be8b4a10dac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.277921 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.277966 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41363a0-fbb1-45bc-8d05-1be8b4a10dac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:07 crc kubenswrapper[4982]: E0224 15:17:07.320395 4982 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.593711 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.687289 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-combined-ca-bundle\") pod \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.687392 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-config-data\") pod \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.687567 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-scripts\") pod \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.687781 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2zsz\" (UniqueName: \"kubernetes.io/projected/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-kube-api-access-c2zsz\") pod \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\" (UID: \"d91be378-9ad1-4685-b2b2-3f96f2f6fc88\") " Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.693723 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-scripts" (OuterVolumeSpecName: "scripts") pod "d91be378-9ad1-4685-b2b2-3f96f2f6fc88" (UID: "d91be378-9ad1-4685-b2b2-3f96f2f6fc88"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.694269 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-kube-api-access-c2zsz" (OuterVolumeSpecName: "kube-api-access-c2zsz") pod "d91be378-9ad1-4685-b2b2-3f96f2f6fc88" (UID: "d91be378-9ad1-4685-b2b2-3f96f2f6fc88"). InnerVolumeSpecName "kube-api-access-c2zsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.720557 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d91be378-9ad1-4685-b2b2-3f96f2f6fc88" (UID: "d91be378-9ad1-4685-b2b2-3f96f2f6fc88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.723597 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-config-data" (OuterVolumeSpecName: "config-data") pod "d91be378-9ad1-4685-b2b2-3f96f2f6fc88" (UID: "d91be378-9ad1-4685-b2b2-3f96f2f6fc88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.790952 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.790988 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.790997 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.791006 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2zsz\" (UniqueName: \"kubernetes.io/projected/d91be378-9ad1-4685-b2b2-3f96f2f6fc88-kube-api-access-c2zsz\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.985665 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-grsdc" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.985654 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-grsdc" event={"ID":"d91be378-9ad1-4685-b2b2-3f96f2f6fc88","Type":"ContainerDied","Data":"2f98e1805f0abfba1f2969a3207457250fe6f31f9b426d8e2a75e4ec772484c6"} Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.986021 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f98e1805f0abfba1f2969a3207457250fe6f31f9b426d8e2a75e4ec772484c6" Feb 24 15:17:07 crc kubenswrapper[4982]: I0224 15:17:07.985678 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.031165 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.095603 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.114605 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 24 15:17:08 crc kubenswrapper[4982]: E0224 15:17:08.115156 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-api" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115173 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-api" Feb 24 15:17:08 crc kubenswrapper[4982]: E0224 15:17:08.115202 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-listener" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115208 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-listener" Feb 24 15:17:08 crc kubenswrapper[4982]: E0224 15:17:08.115216 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91be378-9ad1-4685-b2b2-3f96f2f6fc88" containerName="nova-manage" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115222 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91be378-9ad1-4685-b2b2-3f96f2f6fc88" containerName="nova-manage" Feb 24 15:17:08 crc kubenswrapper[4982]: E0224 15:17:08.115240 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-notifier" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115245 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-notifier" Feb 24 15:17:08 crc kubenswrapper[4982]: E0224 15:17:08.115276 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-evaluator" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115282 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-evaluator" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115521 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-evaluator" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115538 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d91be378-9ad1-4685-b2b2-3f96f2f6fc88" containerName="nova-manage" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115550 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-listener" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115559 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-notifier" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.115572 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" containerName="aodh-api" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.117736 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.121645 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.121680 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-2qctw" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.122140 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.122372 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.122577 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.124981 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.178417 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.178815 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-log" containerID="cri-o://c4e8587421a03f825e0e005f921d1a2211fdb392807507442c2804533898b227" gracePeriod=30 Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.179612 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-api" containerID="cri-o://a947e931d55f05d781efbb296622a1da5b99fc16d2ab470d32d9e8b049a01729" gracePeriod=30 Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.202378 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-config-data\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.202463 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-internal-tls-certs\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.202512 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgf5w\" (UniqueName: \"kubernetes.io/projected/c6885b08-9767-422f-8833-6b09f9401bfd-kube-api-access-dgf5w\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.202559 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-scripts\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.203239 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-public-tls-certs\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.203604 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.209420 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.209637 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="09c9766c-cdcd-422a-968e-bab2d1e067d6" containerName="nova-scheduler-scheduler" containerID="cri-o://8abf25f75c60bd84462d1a0fdfedb71834204e717b1e235ba4196b1d3bf6dd9d" gracePeriod=30 Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.225984 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.226192 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-log" containerID="cri-o://47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3" gracePeriod=30 Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.226626 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-metadata" containerID="cri-o://18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4" gracePeriod=30 Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.305919 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.306063 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-config-data\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.306148 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-internal-tls-certs\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.306199 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgf5w\" (UniqueName: \"kubernetes.io/projected/c6885b08-9767-422f-8833-6b09f9401bfd-kube-api-access-dgf5w\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.306275 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-scripts\") pod 
\"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.306368 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-public-tls-certs\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.311665 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-public-tls-certs\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.312005 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-config-data\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.312494 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.314047 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-scripts\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.321985 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-internal-tls-certs\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.322207 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgf5w\" (UniqueName: \"kubernetes.io/projected/c6885b08-9767-422f-8833-6b09f9401bfd-kube-api-access-dgf5w\") pod \"aodh-0\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") " pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.444002 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.742625 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:17:08 crc kubenswrapper[4982]: I0224 15:17:08.742981 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:17:09 crc kubenswrapper[4982]: I0224 15:17:09.001069 4982 generic.go:334] "Generic (PLEG): container finished" podID="ba7f2184-6692-4763-941a-26043475f64b" containerID="c4e8587421a03f825e0e005f921d1a2211fdb392807507442c2804533898b227" exitCode=143 Feb 24 15:17:09 crc kubenswrapper[4982]: I0224 15:17:09.001196 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba7f2184-6692-4763-941a-26043475f64b","Type":"ContainerDied","Data":"c4e8587421a03f825e0e005f921d1a2211fdb392807507442c2804533898b227"} Feb 24 15:17:09 crc kubenswrapper[4982]: I0224 15:17:09.003691 4982 generic.go:334] "Generic (PLEG): container finished" podID="8ea0a117-d934-440d-87d6-72f077c38029" containerID="47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3" exitCode=143 Feb 24 15:17:09 crc kubenswrapper[4982]: I0224 15:17:09.003740 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea0a117-d934-440d-87d6-72f077c38029","Type":"ContainerDied","Data":"47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3"} Feb 24 15:17:09 crc kubenswrapper[4982]: I0224 15:17:09.053573 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 24 15:17:09 crc kubenswrapper[4982]: W0224 15:17:09.055769 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6885b08_9767_422f_8833_6b09f9401bfd.slice/crio-75bee8d61b7b215d508d27661fd973f6b69561652c210bac4b4db336633583fb WatchSource:0}: Error finding container 75bee8d61b7b215d508d27661fd973f6b69561652c210bac4b4db336633583fb: Status 404 returned error can't find the container with id 75bee8d61b7b215d508d27661fd973f6b69561652c210bac4b4db336633583fb Feb 24 15:17:09 crc kubenswrapper[4982]: I0224 15:17:09.160384 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41363a0-fbb1-45bc-8d05-1be8b4a10dac" path="/var/lib/kubelet/pods/b41363a0-fbb1-45bc-8d05-1be8b4a10dac/volumes" Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.019007 4982 generic.go:334] "Generic (PLEG): container finished" podID="09c9766c-cdcd-422a-968e-bab2d1e067d6" containerID="8abf25f75c60bd84462d1a0fdfedb71834204e717b1e235ba4196b1d3bf6dd9d" exitCode=0 Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.019263 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09c9766c-cdcd-422a-968e-bab2d1e067d6","Type":"ContainerDied","Data":"8abf25f75c60bd84462d1a0fdfedb71834204e717b1e235ba4196b1d3bf6dd9d"} Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.019292 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"09c9766c-cdcd-422a-968e-bab2d1e067d6","Type":"ContainerDied","Data":"34860bcaec00356b9f8825919efbccdb704540181a11966220913d371474a47f"} Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.019305 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34860bcaec00356b9f8825919efbccdb704540181a11966220913d371474a47f" Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.023910 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerStarted","Data":"75bee8d61b7b215d508d27661fd973f6b69561652c210bac4b4db336633583fb"} Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.105409 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.179547 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-combined-ca-bundle\") pod \"09c9766c-cdcd-422a-968e-bab2d1e067d6\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.179796 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-config-data\") pod \"09c9766c-cdcd-422a-968e-bab2d1e067d6\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.180104 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24rjb\" (UniqueName: \"kubernetes.io/projected/09c9766c-cdcd-422a-968e-bab2d1e067d6-kube-api-access-24rjb\") pod \"09c9766c-cdcd-422a-968e-bab2d1e067d6\" (UID: \"09c9766c-cdcd-422a-968e-bab2d1e067d6\") " Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.186648 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c9766c-cdcd-422a-968e-bab2d1e067d6-kube-api-access-24rjb" (OuterVolumeSpecName: "kube-api-access-24rjb") pod "09c9766c-cdcd-422a-968e-bab2d1e067d6" (UID: "09c9766c-cdcd-422a-968e-bab2d1e067d6"). InnerVolumeSpecName "kube-api-access-24rjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.216648 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09c9766c-cdcd-422a-968e-bab2d1e067d6" (UID: "09c9766c-cdcd-422a-968e-bab2d1e067d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.218326 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-config-data" (OuterVolumeSpecName: "config-data") pod "09c9766c-cdcd-422a-968e-bab2d1e067d6" (UID: "09c9766c-cdcd-422a-968e-bab2d1e067d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.284796 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.284841 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c9766c-cdcd-422a-968e-bab2d1e067d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:10 crc kubenswrapper[4982]: I0224 15:17:10.284854 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24rjb\" (UniqueName: \"kubernetes.io/projected/09c9766c-cdcd-422a-968e-bab2d1e067d6-kube-api-access-24rjb\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.037226 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerStarted","Data":"a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7"} Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.037600 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerStarted","Data":"7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19"} Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.037269 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.098884 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.179512 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.190668 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:17:11 crc kubenswrapper[4982]: E0224 15:17:11.192263 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c9766c-cdcd-422a-968e-bab2d1e067d6" containerName="nova-scheduler-scheduler" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.192289 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c9766c-cdcd-422a-968e-bab2d1e067d6" containerName="nova-scheduler-scheduler" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.192601 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c9766c-cdcd-422a-968e-bab2d1e067d6" containerName="nova-scheduler-scheduler" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.193488 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.198561 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.205194 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.309886 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f62tq\" (UniqueName: \"kubernetes.io/projected/0c763ae1-e628-4088-b3da-1a4392f1cb37-kube-api-access-f62tq\") pod \"nova-scheduler-0\" (UID: \"0c763ae1-e628-4088-b3da-1a4392f1cb37\") " pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.310440 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c763ae1-e628-4088-b3da-1a4392f1cb37-config-data\") pod \"nova-scheduler-0\" (UID: \"0c763ae1-e628-4088-b3da-1a4392f1cb37\") " pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.310596 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c763ae1-e628-4088-b3da-1a4392f1cb37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c763ae1-e628-4088-b3da-1a4392f1cb37\") " pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.413220 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f62tq\" (UniqueName: \"kubernetes.io/projected/0c763ae1-e628-4088-b3da-1a4392f1cb37-kube-api-access-f62tq\") pod \"nova-scheduler-0\" (UID: \"0c763ae1-e628-4088-b3da-1a4392f1cb37\") " pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.414169 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c763ae1-e628-4088-b3da-1a4392f1cb37-config-data\") pod \"nova-scheduler-0\" (UID: \"0c763ae1-e628-4088-b3da-1a4392f1cb37\") " pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.414234 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c763ae1-e628-4088-b3da-1a4392f1cb37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c763ae1-e628-4088-b3da-1a4392f1cb37\") " pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.419313 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c763ae1-e628-4088-b3da-1a4392f1cb37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c763ae1-e628-4088-b3da-1a4392f1cb37\") " pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.420105 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c763ae1-e628-4088-b3da-1a4392f1cb37-config-data\") pod \"nova-scheduler-0\" (UID: \"0c763ae1-e628-4088-b3da-1a4392f1cb37\") " pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.443332 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f62tq\" (UniqueName: 
\"kubernetes.io/projected/0c763ae1-e628-4088-b3da-1a4392f1cb37-kube-api-access-f62tq\") pod \"nova-scheduler-0\" (UID: \"0c763ae1-e628-4088-b3da-1a4392f1cb37\") " pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.520542 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 15:17:11 crc kubenswrapper[4982]: I0224 15:17:11.914804 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.038392 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-combined-ca-bundle\") pod \"8ea0a117-d934-440d-87d6-72f077c38029\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.038452 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea0a117-d934-440d-87d6-72f077c38029-logs\") pod \"8ea0a117-d934-440d-87d6-72f077c38029\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.038532 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m96fl\" (UniqueName: \"kubernetes.io/projected/8ea0a117-d934-440d-87d6-72f077c38029-kube-api-access-m96fl\") pod \"8ea0a117-d934-440d-87d6-72f077c38029\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.038615 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-config-data\") pod \"8ea0a117-d934-440d-87d6-72f077c38029\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.038706 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-nova-metadata-tls-certs\") pod \"8ea0a117-d934-440d-87d6-72f077c38029\" (UID: \"8ea0a117-d934-440d-87d6-72f077c38029\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.040387 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea0a117-d934-440d-87d6-72f077c38029-logs" (OuterVolumeSpecName: "logs") pod "8ea0a117-d934-440d-87d6-72f077c38029" (UID: "8ea0a117-d934-440d-87d6-72f077c38029"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.046905 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea0a117-d934-440d-87d6-72f077c38029-kube-api-access-m96fl" (OuterVolumeSpecName: "kube-api-access-m96fl") pod "8ea0a117-d934-440d-87d6-72f077c38029" (UID: "8ea0a117-d934-440d-87d6-72f077c38029"). InnerVolumeSpecName "kube-api-access-m96fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.132693 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ea0a117-d934-440d-87d6-72f077c38029" (UID: "8ea0a117-d934-440d-87d6-72f077c38029"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.146035 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.146115 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea0a117-d934-440d-87d6-72f077c38029-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.146127 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m96fl\" (UniqueName: \"kubernetes.io/projected/8ea0a117-d934-440d-87d6-72f077c38029-kube-api-access-m96fl\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.166861 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8ea0a117-d934-440d-87d6-72f077c38029" (UID: "8ea0a117-d934-440d-87d6-72f077c38029"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.185647 4982 generic.go:334] "Generic (PLEG): container finished" podID="8ea0a117-d934-440d-87d6-72f077c38029" containerID="18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4" exitCode=0 Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.185782 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.186557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea0a117-d934-440d-87d6-72f077c38029","Type":"ContainerDied","Data":"18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4"} Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.186608 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea0a117-d934-440d-87d6-72f077c38029","Type":"ContainerDied","Data":"ed06f96391d92a2118d91c86e5edf39172a58d2963baf6d5d35c395cad9904fd"} Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.186628 4982 scope.go:117] "RemoveContainer" containerID="18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.244768 4982 generic.go:334] "Generic (PLEG): container finished" podID="ba7f2184-6692-4763-941a-26043475f64b" containerID="a947e931d55f05d781efbb296622a1da5b99fc16d2ab470d32d9e8b049a01729" exitCode=0 Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.244859 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba7f2184-6692-4763-941a-26043475f64b","Type":"ContainerDied","Data":"a947e931d55f05d781efbb296622a1da5b99fc16d2ab470d32d9e8b049a01729"} Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.253115 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.261785 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-config-data" (OuterVolumeSpecName: "config-data") pod 
"8ea0a117-d934-440d-87d6-72f077c38029" (UID: "8ea0a117-d934-440d-87d6-72f077c38029"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.273085 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.273123 4982 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea0a117-d934-440d-87d6-72f077c38029-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.361042 4982 scope.go:117] "RemoveContainer" containerID="47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.410216 4982 scope.go:117] "RemoveContainer" containerID="18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4" Feb 24 15:17:12 crc kubenswrapper[4982]: E0224 15:17:12.410579 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4\": container with ID starting with 18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4 not found: ID does not exist" containerID="18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.410647 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4"} err="failed to get container status \"18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4\": rpc error: code = NotFound desc = could not find container \"18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4\": container with ID starting with 18c53639b4eef4061f338d89b1ad7516a17c85e6c55a3bf62263b7cc7fdc1bb4 not found: ID does not exist" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.410676 4982 scope.go:117] "RemoveContainer" containerID="47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3" Feb 24 15:17:12 crc kubenswrapper[4982]: E0224 15:17:12.410975 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3\": container with ID starting with 47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3 not found: ID does not exist" containerID="47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.411004 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3"} err="failed to get container status \"47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3\": rpc error: code = NotFound desc = could not find container \"47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3\": container with ID starting with 47e27c8dbed085384ed0a7e2d798495deaa686388134a62306c702528e879de3 not found: ID does not exist" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.544290 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:17:12 crc 
kubenswrapper[4982]: I0224 15:17:12.557233 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.574097 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:17:12 crc kubenswrapper[4982]: E0224 15:17:12.574817 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-metadata" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.574844 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-metadata" Feb 24 15:17:12 crc kubenswrapper[4982]: E0224 15:17:12.574867 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-log" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.574878 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-log" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.575232 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-metadata" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.575264 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea0a117-d934-440d-87d6-72f077c38029" containerName="nova-metadata-log" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.581741 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.585686 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.586363 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.591095 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.646444 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.695260 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.695387 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgkcd\" (UniqueName: \"kubernetes.io/projected/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-kube-api-access-xgkcd\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.695527 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-config-data\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.695721 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.695767 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-logs\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.796983 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxqnd\" (UniqueName: \"kubernetes.io/projected/ba7f2184-6692-4763-941a-26043475f64b-kube-api-access-xxqnd\") pod \"ba7f2184-6692-4763-941a-26043475f64b\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.797387 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-public-tls-certs\") pod \"ba7f2184-6692-4763-941a-26043475f64b\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.797421 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7f2184-6692-4763-941a-26043475f64b-logs\") pod \"ba7f2184-6692-4763-941a-26043475f64b\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.797480 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-internal-tls-certs\") pod \"ba7f2184-6692-4763-941a-26043475f64b\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.797584 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-combined-ca-bundle\") pod \"ba7f2184-6692-4763-941a-26043475f64b\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.797997 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-config-data\") pod \"ba7f2184-6692-4763-941a-26043475f64b\" (UID: \"ba7f2184-6692-4763-941a-26043475f64b\") " Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.798078 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba7f2184-6692-4763-941a-26043475f64b-logs" (OuterVolumeSpecName: "logs") pod "ba7f2184-6692-4763-941a-26043475f64b" (UID: "ba7f2184-6692-4763-941a-26043475f64b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.798587 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgkcd\" (UniqueName: \"kubernetes.io/projected/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-kube-api-access-xgkcd\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.798711 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-config-data\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.798793 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.798834 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-logs\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.798955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.799068 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7f2184-6692-4763-941a-26043475f64b-logs\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.799692 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-logs\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.805085 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-config-data\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.807934 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba7f2184-6692-4763-941a-26043475f64b-kube-api-access-xxqnd" (OuterVolumeSpecName: "kube-api-access-xxqnd") pod "ba7f2184-6692-4763-941a-26043475f64b" (UID: "ba7f2184-6692-4763-941a-26043475f64b"). InnerVolumeSpecName "kube-api-access-xxqnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.807979 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.808116 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.840205 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgkcd\" (UniqueName: \"kubernetes.io/projected/aa8b9e96-e44a-4a46-87c6-0a473fc97e22-kube-api-access-xgkcd\") pod \"nova-metadata-0\" (UID: \"aa8b9e96-e44a-4a46-87c6-0a473fc97e22\") " pod="openstack/nova-metadata-0" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.840847 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba7f2184-6692-4763-941a-26043475f64b" (UID: "ba7f2184-6692-4763-941a-26043475f64b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.882443 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-config-data" (OuterVolumeSpecName: "config-data") pod "ba7f2184-6692-4763-941a-26043475f64b" (UID: "ba7f2184-6692-4763-941a-26043475f64b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.887168 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ba7f2184-6692-4763-941a-26043475f64b" (UID: "ba7f2184-6692-4763-941a-26043475f64b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.901582 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.901613 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxqnd\" (UniqueName: \"kubernetes.io/projected/ba7f2184-6692-4763-941a-26043475f64b-kube-api-access-xxqnd\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.901626 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.901635 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.910291 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ba7f2184-6692-4763-941a-26043475f64b" (UID: "ba7f2184-6692-4763-941a-26043475f64b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:12 crc kubenswrapper[4982]: I0224 15:17:12.943431 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.004268 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7f2184-6692-4763-941a-26043475f64b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.168251 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09c9766c-cdcd-422a-968e-bab2d1e067d6" path="/var/lib/kubelet/pods/09c9766c-cdcd-422a-968e-bab2d1e067d6/volumes" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.169252 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea0a117-d934-440d-87d6-72f077c38029" path="/var/lib/kubelet/pods/8ea0a117-d934-440d-87d6-72f077c38029/volumes" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.301997 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba7f2184-6692-4763-941a-26043475f64b","Type":"ContainerDied","Data":"39fb3ce955161f2aaf5a6b907cc958b4a4627ff4de4745db12ab91cceebb5942"} Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.302013 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.302057 4982 scope.go:117] "RemoveContainer" containerID="a947e931d55f05d781efbb296622a1da5b99fc16d2ab470d32d9e8b049a01729" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.331723 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c763ae1-e628-4088-b3da-1a4392f1cb37","Type":"ContainerStarted","Data":"9d5b440f6e6b06dd8177a623a74864f0d92829e5d12d49269f714db7b9a26bab"} Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.331778 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c763ae1-e628-4088-b3da-1a4392f1cb37","Type":"ContainerStarted","Data":"140f73397e3c7895b9f5fca0d63fecc6fc47d41abc1741439d2925a658d725ad"} Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.348997 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerStarted","Data":"c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e"} Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.385688 4982 scope.go:117] "RemoveContainer" containerID="c4e8587421a03f825e0e005f921d1a2211fdb392807507442c2804533898b227" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.409934 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.464552 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.479721 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.479697498 podStartE2EDuration="2.479697498s" podCreationTimestamp="2026-02-24 15:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:17:13.353426619 +0000 UTC m=+1694.972485112" watchObservedRunningTime="2026-02-24 15:17:13.479697498 +0000 UTC m=+1695.098755991" Feb 24 15:17:13 crc kubenswrapper[4982]: W0224 15:17:13.500850 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa8b9e96_e44a_4a46_87c6_0a473fc97e22.slice/crio-71a8dd581033b830a3f31c9305870fa8ce91fb0a4bf678bb86ccf8fdd6dc2ef8 WatchSource:0}: Error finding container 71a8dd581033b830a3f31c9305870fa8ce91fb0a4bf678bb86ccf8fdd6dc2ef8: Status 404 returned error can't find the container with id 71a8dd581033b830a3f31c9305870fa8ce91fb0a4bf678bb86ccf8fdd6dc2ef8 Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.502461 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 24 15:17:13 crc kubenswrapper[4982]: E0224 15:17:13.502968 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-log" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.502980 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-log" Feb 24 15:17:13 crc kubenswrapper[4982]: E0224 15:17:13.503008 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-api" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.503014 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-api" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.503217 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-api" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.503234 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7f2184-6692-4763-941a-26043475f64b" containerName="nova-api-log" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.504453 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.514796 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.514944 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.515046 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.518791 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.546834 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.622691 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.622746 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx79d\" (UniqueName: \"kubernetes.io/projected/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-kube-api-access-nx79d\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.622940 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-logs\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.623021 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.623092 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-config-data\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.623234 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.731789 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-logs\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.731901 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.732063 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-config-data\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.732142 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.732257 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-logs\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.732529 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.732592 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx79d\" (UniqueName: \"kubernetes.io/projected/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-kube-api-access-nx79d\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.738235 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.738250 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.739066 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.739200 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-config-data\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.756451 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx79d\" (UniqueName: \"kubernetes.io/projected/aac85fdb-fa1c-47c7-8904-d72aa10f69ae-kube-api-access-nx79d\") pod \"nova-api-0\" (UID: \"aac85fdb-fa1c-47c7-8904-d72aa10f69ae\") " pod="openstack/nova-api-0" Feb 24 15:17:13 crc kubenswrapper[4982]: I0224 15:17:13.829892 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 15:17:14 crc kubenswrapper[4982]: I0224 15:17:14.338072 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 15:17:14 crc kubenswrapper[4982]: W0224 15:17:14.339126 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaac85fdb_fa1c_47c7_8904_d72aa10f69ae.slice/crio-f3189d295ed7ec46d1dfe02907fb89f12f94e1b50ec8d167fbede75d76bf8dc7 WatchSource:0}: Error finding container f3189d295ed7ec46d1dfe02907fb89f12f94e1b50ec8d167fbede75d76bf8dc7: Status 404 returned error can't find the container with id f3189d295ed7ec46d1dfe02907fb89f12f94e1b50ec8d167fbede75d76bf8dc7 Feb 24 15:17:14 crc kubenswrapper[4982]: I0224 15:17:14.375323 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aac85fdb-fa1c-47c7-8904-d72aa10f69ae","Type":"ContainerStarted","Data":"f3189d295ed7ec46d1dfe02907fb89f12f94e1b50ec8d167fbede75d76bf8dc7"} Feb 24 15:17:14 crc kubenswrapper[4982]: I0224 15:17:14.378626 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerStarted","Data":"a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7"} Feb 24 15:17:14 crc kubenswrapper[4982]: I0224 15:17:14.382991 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa8b9e96-e44a-4a46-87c6-0a473fc97e22","Type":"ContainerStarted","Data":"f3b1516a5796e6541f8a54b5983b438a7910bb782c170568cac8fd1c470b5344"} Feb 24 15:17:14 crc kubenswrapper[4982]: I0224 15:17:14.383054 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa8b9e96-e44a-4a46-87c6-0a473fc97e22","Type":"ContainerStarted","Data":"71a8dd581033b830a3f31c9305870fa8ce91fb0a4bf678bb86ccf8fdd6dc2ef8"} Feb 24 15:17:14 crc kubenswrapper[4982]: I0224 15:17:14.412142 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.753424737 podStartE2EDuration="6.412101348s" podCreationTimestamp="2026-02-24 15:17:08 +0000 UTC" firstStartedPulling="2026-02-24 15:17:09.057696679 +0000 UTC m=+1690.676755172" lastFinishedPulling="2026-02-24 15:17:12.71637328 +0000 UTC m=+1694.335431783" observedRunningTime="2026-02-24 15:17:14.406598868 +0000 UTC m=+1696.025657381" watchObservedRunningTime="2026-02-24 15:17:14.412101348 +0000 UTC m=+1696.031159861" Feb 24 15:17:15 crc 
kubenswrapper[4982]: I0224 15:17:15.160036 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba7f2184-6692-4763-941a-26043475f64b" path="/var/lib/kubelet/pods/ba7f2184-6692-4763-941a-26043475f64b/volumes" Feb 24 15:17:15 crc kubenswrapper[4982]: I0224 15:17:15.395753 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aac85fdb-fa1c-47c7-8904-d72aa10f69ae","Type":"ContainerStarted","Data":"710bc4d16591e3b96288b9430a0c48ebe55f68a575f3365ac404e568b78a9aad"} Feb 24 15:17:15 crc kubenswrapper[4982]: I0224 15:17:15.395822 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aac85fdb-fa1c-47c7-8904-d72aa10f69ae","Type":"ContainerStarted","Data":"bddba30059039728a5c760a8e7f38c79035fa62c6bd0d272665f29d9d5425053"} Feb 24 15:17:15 crc kubenswrapper[4982]: I0224 15:17:15.399854 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa8b9e96-e44a-4a46-87c6-0a473fc97e22","Type":"ContainerStarted","Data":"91bc7d5200eb2baabee700ecfaa5f80bfc95b7e54908535677747ee7a2abfe33"} Feb 24 15:17:15 crc kubenswrapper[4982]: I0224 15:17:15.433913 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.433886645 podStartE2EDuration="2.433886645s" podCreationTimestamp="2026-02-24 15:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:17:15.418355803 +0000 UTC m=+1697.037414316" watchObservedRunningTime="2026-02-24 15:17:15.433886645 +0000 UTC m=+1697.052945148" Feb 24 15:17:15 crc kubenswrapper[4982]: I0224 15:17:15.451016 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.450998109 podStartE2EDuration="3.450998109s" podCreationTimestamp="2026-02-24 15:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:17:15.437435292 +0000 UTC m=+1697.056493785" watchObservedRunningTime="2026-02-24 15:17:15.450998109 +0000 UTC m=+1697.070056602" Feb 24 15:17:16 crc kubenswrapper[4982]: I0224 15:17:16.522641 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 24 15:17:17 crc kubenswrapper[4982]: I0224 15:17:17.943725 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 15:17:17 crc kubenswrapper[4982]: I0224 15:17:17.943798 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 15:17:21 crc kubenswrapper[4982]: I0224 15:17:21.521049 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 24 15:17:21 crc kubenswrapper[4982]: I0224 15:17:21.554917 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 24 15:17:22 crc kubenswrapper[4982]: I0224 15:17:22.523653 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 24 15:17:22 crc kubenswrapper[4982]: I0224 15:17:22.943769 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 24 15:17:22 crc kubenswrapper[4982]: I0224 15:17:22.944342 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Feb 24 15:17:23 crc kubenswrapper[4982]: I0224 15:17:23.830782 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 15:17:23 crc kubenswrapper[4982]: I0224 15:17:23.830836 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 15:17:23 crc kubenswrapper[4982]: I0224 15:17:23.964694 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aa8b9e96-e44a-4a46-87c6-0a473fc97e22" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.16:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:17:23 crc kubenswrapper[4982]: I0224 15:17:23.965144 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aa8b9e96-e44a-4a46-87c6-0a473fc97e22" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.16:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 15:17:24 crc kubenswrapper[4982]: I0224 15:17:24.843712 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aac85fdb-fa1c-47c7-8904-d72aa10f69ae" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 15:17:24 crc kubenswrapper[4982]: I0224 15:17:24.843736 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aac85fdb-fa1c-47c7-8904-d72aa10f69ae" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 15:17:25 crc kubenswrapper[4982]: I0224 15:17:25.512675 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 24 15:17:32 crc kubenswrapper[4982]: I0224 15:17:32.951342 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 15:17:32 crc kubenswrapper[4982]: I0224 15:17:32.953846 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 15:17:32 crc kubenswrapper[4982]: I0224 15:17:32.958900 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 24 15:17:33 crc kubenswrapper[4982]: I0224 15:17:33.643258 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 24 15:17:33 crc kubenswrapper[4982]: I0224 15:17:33.838626 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 15:17:33 crc kubenswrapper[4982]: I0224 15:17:33.840048 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 15:17:33 crc kubenswrapper[4982]: I0224 15:17:33.840626 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 15:17:33 crc kubenswrapper[4982]: I0224 15:17:33.851978 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 24 15:17:34 crc kubenswrapper[4982]: I0224 15:17:34.639865 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 15:17:34 crc kubenswrapper[4982]: I0224 15:17:34.654261 
Feb 24 15:17:34 crc kubenswrapper[4982]: I0224 15:17:34.654261 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 24 15:17:38 crc kubenswrapper[4982]: I0224 15:17:38.738748 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 15:17:38 crc kubenswrapper[4982]: I0224 15:17:38.739662 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.008941 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-9zp9n"]
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.019296 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-9zp9n"]
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.113262 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-qf6l6"]
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.115842 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qf6l6"
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.141198 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qf6l6"]
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.177481 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2295dbe7-ace5-48d8-8952-fd4c3b5ddf99" path="/var/lib/kubelet/pods/2295dbe7-ace5-48d8-8952-fd4c3b5ddf99/volumes"
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.207372 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-combined-ca-bundle\") pod \"heat-db-sync-qf6l6\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " pod="openstack/heat-db-sync-qf6l6"
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.207787 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-config-data\") pod \"heat-db-sync-qf6l6\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " pod="openstack/heat-db-sync-qf6l6"
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.207921 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9h4\" (UniqueName: \"kubernetes.io/projected/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-kube-api-access-4z9h4\") pod \"heat-db-sync-qf6l6\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " pod="openstack/heat-db-sync-qf6l6"
Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.310839 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-combined-ca-bundle\") pod \"heat-db-sync-qf6l6\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " pod="openstack/heat-db-sync-qf6l6"
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-config-data\") pod \"heat-db-sync-qf6l6\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " pod="openstack/heat-db-sync-qf6l6" Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.311766 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z9h4\" (UniqueName: \"kubernetes.io/projected/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-kube-api-access-4z9h4\") pod \"heat-db-sync-qf6l6\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " pod="openstack/heat-db-sync-qf6l6" Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.327979 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-combined-ca-bundle\") pod \"heat-db-sync-qf6l6\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " pod="openstack/heat-db-sync-qf6l6" Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.328168 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z9h4\" (UniqueName: \"kubernetes.io/projected/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-kube-api-access-4z9h4\") pod \"heat-db-sync-qf6l6\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " pod="openstack/heat-db-sync-qf6l6" Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.333696 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-config-data\") pod \"heat-db-sync-qf6l6\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " pod="openstack/heat-db-sync-qf6l6" Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.448713 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-qf6l6" Feb 24 15:17:45 crc kubenswrapper[4982]: I0224 15:17:45.978862 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qf6l6"] Feb 24 15:17:46 crc kubenswrapper[4982]: I0224 15:17:46.823678 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qf6l6" event={"ID":"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c","Type":"ContainerStarted","Data":"25c45a82f3d517a66250a298457667a2ae7d3a0520037921d9507e2f3c284ec5"} Feb 24 15:17:47 crc kubenswrapper[4982]: I0224 15:17:47.026165 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 24 15:17:47 crc kubenswrapper[4982]: I0224 15:17:47.781943 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:17:47 crc kubenswrapper[4982]: I0224 15:17:47.782267 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="ceilometer-central-agent" containerID="cri-o://32f033e5e092bc578edb80cb647aab3897acc283c62cbd7fd52c35570f278ad5" gracePeriod=30 Feb 24 15:17:47 crc kubenswrapper[4982]: I0224 15:17:47.782387 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="ceilometer-notification-agent" containerID="cri-o://deeec8978ce281c35598f8f923d6f772e95e22bea0e0321162e8323072d08281" gracePeriod=30 Feb 24 15:17:47 crc kubenswrapper[4982]: I0224 15:17:47.782408 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="proxy-httpd" containerID="cri-o://4714106cede0199a40fc90b43537e3bfcf5dd6e30a0aaa5a43abbda0584227fb" gracePeriod=30 Feb 24 15:17:47 crc kubenswrapper[4982]: I0224 15:17:47.782391 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="sg-core" containerID="cri-o://e2459aabbda368f05e0b6e092503bd0bef55a379d7b38944a6f33f48a70515b5" gracePeriod=30 Feb 24 15:17:48 crc kubenswrapper[4982]: I0224 15:17:48.356624 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 15:17:48 crc kubenswrapper[4982]: E0224 15:17:48.392839 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16d339cd_3a99_489f_8d51_f65bbef63ab8.slice/crio-conmon-32f033e5e092bc578edb80cb647aab3897acc283c62cbd7fd52c35570f278ad5.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:17:48 crc kubenswrapper[4982]: E0224 15:17:48.393026 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16d339cd_3a99_489f_8d51_f65bbef63ab8.slice/crio-conmon-32f033e5e092bc578edb80cb647aab3897acc283c62cbd7fd52c35570f278ad5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16d339cd_3a99_489f_8d51_f65bbef63ab8.slice/crio-32f033e5e092bc578edb80cb647aab3897acc283c62cbd7fd52c35570f278ad5.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:17:48 crc kubenswrapper[4982]: I0224 15:17:48.852122 4982 generic.go:334] 
"Generic (PLEG): container finished" podID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerID="4714106cede0199a40fc90b43537e3bfcf5dd6e30a0aaa5a43abbda0584227fb" exitCode=0 Feb 24 15:17:48 crc kubenswrapper[4982]: I0224 15:17:48.852434 4982 generic.go:334] "Generic (PLEG): container finished" podID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerID="e2459aabbda368f05e0b6e092503bd0bef55a379d7b38944a6f33f48a70515b5" exitCode=2 Feb 24 15:17:48 crc kubenswrapper[4982]: I0224 15:17:48.852444 4982 generic.go:334] "Generic (PLEG): container finished" podID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerID="32f033e5e092bc578edb80cb647aab3897acc283c62cbd7fd52c35570f278ad5" exitCode=0 Feb 24 15:17:48 crc kubenswrapper[4982]: I0224 15:17:48.852296 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerDied","Data":"4714106cede0199a40fc90b43537e3bfcf5dd6e30a0aaa5a43abbda0584227fb"} Feb 24 15:17:48 crc kubenswrapper[4982]: I0224 15:17:48.852485 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerDied","Data":"e2459aabbda368f05e0b6e092503bd0bef55a379d7b38944a6f33f48a70515b5"} Feb 24 15:17:48 crc kubenswrapper[4982]: I0224 15:17:48.852511 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerDied","Data":"32f033e5e092bc578edb80cb647aab3897acc283c62cbd7fd52c35570f278ad5"} Feb 24 15:17:50 crc kubenswrapper[4982]: I0224 15:17:50.891373 4982 generic.go:334] "Generic (PLEG): container finished" podID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerID="deeec8978ce281c35598f8f923d6f772e95e22bea0e0321162e8323072d08281" exitCode=0 Feb 24 15:17:50 crc kubenswrapper[4982]: I0224 15:17:50.891965 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerDied","Data":"deeec8978ce281c35598f8f923d6f772e95e22bea0e0321162e8323072d08281"} Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.358324 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.480087 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-scripts\") pod \"16d339cd-3a99-489f-8d51-f65bbef63ab8\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.480181 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-sg-core-conf-yaml\") pod \"16d339cd-3a99-489f-8d51-f65bbef63ab8\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.480236 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-config-data\") pod \"16d339cd-3a99-489f-8d51-f65bbef63ab8\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.480329 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-combined-ca-bundle\") pod \"16d339cd-3a99-489f-8d51-f65bbef63ab8\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.480355 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-ceilometer-tls-certs\") pod \"16d339cd-3a99-489f-8d51-f65bbef63ab8\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.480396 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzhff\" (UniqueName: \"kubernetes.io/projected/16d339cd-3a99-489f-8d51-f65bbef63ab8-kube-api-access-tzhff\") pod \"16d339cd-3a99-489f-8d51-f65bbef63ab8\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.480425 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-run-httpd\") pod \"16d339cd-3a99-489f-8d51-f65bbef63ab8\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.480544 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-log-httpd\") pod \"16d339cd-3a99-489f-8d51-f65bbef63ab8\" (UID: \"16d339cd-3a99-489f-8d51-f65bbef63ab8\") " Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.482002 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "16d339cd-3a99-489f-8d51-f65bbef63ab8" (UID: "16d339cd-3a99-489f-8d51-f65bbef63ab8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.482207 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "16d339cd-3a99-489f-8d51-f65bbef63ab8" (UID: "16d339cd-3a99-489f-8d51-f65bbef63ab8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.489777 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-scripts" (OuterVolumeSpecName: "scripts") pod "16d339cd-3a99-489f-8d51-f65bbef63ab8" (UID: "16d339cd-3a99-489f-8d51-f65bbef63ab8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.524073 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d339cd-3a99-489f-8d51-f65bbef63ab8-kube-api-access-tzhff" (OuterVolumeSpecName: "kube-api-access-tzhff") pod "16d339cd-3a99-489f-8d51-f65bbef63ab8" (UID: "16d339cd-3a99-489f-8d51-f65bbef63ab8"). InnerVolumeSpecName "kube-api-access-tzhff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.557804 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "16d339cd-3a99-489f-8d51-f65bbef63ab8" (UID: "16d339cd-3a99-489f-8d51-f65bbef63ab8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.584792 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.584837 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.584849 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.584862 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzhff\" (UniqueName: \"kubernetes.io/projected/16d339cd-3a99-489f-8d51-f65bbef63ab8-kube-api-access-tzhff\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.584873 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16d339cd-3a99-489f-8d51-f65bbef63ab8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.611694 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "16d339cd-3a99-489f-8d51-f65bbef63ab8" (UID: "16d339cd-3a99-489f-8d51-f65bbef63ab8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.645981 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16d339cd-3a99-489f-8d51-f65bbef63ab8" (UID: "16d339cd-3a99-489f-8d51-f65bbef63ab8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.688931 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.688970 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.706229 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-config-data" (OuterVolumeSpecName: "config-data") pod "16d339cd-3a99-489f-8d51-f65bbef63ab8" (UID: "16d339cd-3a99-489f-8d51-f65bbef63ab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.790402 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d339cd-3a99-489f-8d51-f65bbef63ab8-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.912838 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16d339cd-3a99-489f-8d51-f65bbef63ab8","Type":"ContainerDied","Data":"c50033188cbd48ef4bec2f8f9c22a6ed8f2f7a4d8dc6ea44135ecb617cff6046"} Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.912919 4982 scope.go:117] "RemoveContainer" containerID="4714106cede0199a40fc90b43537e3bfcf5dd6e30a0aaa5a43abbda0584227fb" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.912925 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.967285 4982 scope.go:117] "RemoveContainer" containerID="e2459aabbda368f05e0b6e092503bd0bef55a379d7b38944a6f33f48a70515b5" Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.976933 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:17:51 crc kubenswrapper[4982]: I0224 15:17:51.995676 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.013231 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:17:52 crc kubenswrapper[4982]: E0224 15:17:52.013825 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="ceilometer-notification-agent" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.013843 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="ceilometer-notification-agent" Feb 24 15:17:52 crc kubenswrapper[4982]: E0224 15:17:52.013861 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="sg-core" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.013868 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="sg-core" Feb 24 15:17:52 crc kubenswrapper[4982]: E0224 15:17:52.013884 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="proxy-httpd" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.013890 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="proxy-httpd" Feb 24 15:17:52 crc kubenswrapper[4982]: E0224 15:17:52.013905 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="ceilometer-central-agent" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.013911 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="ceilometer-central-agent" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.014147 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="ceilometer-notification-agent" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.014178 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="sg-core" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.014191 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="proxy-httpd" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.014205 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" containerName="ceilometer-central-agent" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.016758 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.020393 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.020643 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.020915 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.028329 4982 scope.go:117] "RemoveContainer" containerID="deeec8978ce281c35598f8f923d6f772e95e22bea0e0321162e8323072d08281" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.032993 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.097572 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-config-data\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.097988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9dd449a-d430-42cf-8d1a-492c750fde59-run-httpd\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.098019 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9dd449a-d430-42cf-8d1a-492c750fde59-log-httpd\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.098127 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.098218 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-scripts\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.098238 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.098299 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.098366 4982 
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.098366 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljdx\" (UniqueName: \"kubernetes.io/projected/f9dd449a-d430-42cf-8d1a-492c750fde59-kube-api-access-lljdx\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0"
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.099767 4982 scope.go:117] "RemoveContainer" containerID="32f033e5e092bc578edb80cb647aab3897acc283c62cbd7fd52c35570f278ad5"
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.199425 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-scripts\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0"
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.199474 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0"
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.199531 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0"
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.199558 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lljdx\" (UniqueName: \"kubernetes.io/projected/f9dd449a-d430-42cf-8d1a-492c750fde59-kube-api-access-lljdx\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0"
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.199637 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-config-data\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0"
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.199681 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9dd449a-d430-42cf-8d1a-492c750fde59-run-httpd\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0"
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.199698 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9dd449a-d430-42cf-8d1a-492c750fde59-log-httpd\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0"
Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.199774 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0"
\"kubernetes.io/empty-dir/f9dd449a-d430-42cf-8d1a-492c750fde59-run-httpd\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.201890 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9dd449a-d430-42cf-8d1a-492c750fde59-log-httpd\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.206787 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-config-data\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.207553 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.207749 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.208272 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.208395 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9dd449a-d430-42cf-8d1a-492c750fde59-scripts\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.220887 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lljdx\" (UniqueName: \"kubernetes.io/projected/f9dd449a-d430-42cf-8d1a-492c750fde59-kube-api-access-lljdx\") pod \"ceilometer-0\" (UID: \"f9dd449a-d430-42cf-8d1a-492c750fde59\") " pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.350135 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.624587 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.652945 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="rabbitmq" containerID="cri-o://c2aff8a7f1dc2f2006eea95379acfeffb3dc9d305e76fa39550d666641993f24" gracePeriod=604795 Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.944318 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.135:5671: connect: connection refused" Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.959382 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerName="rabbitmq" containerID="cri-o://60cbd02be2e19922f7155b97a44cb40ea052b45a59a5b3d9adb2b51e3945fad6" gracePeriod=604796 Feb 24 15:17:52 crc kubenswrapper[4982]: I0224 15:17:52.977148 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 24 15:17:53 crc kubenswrapper[4982]: I0224 15:17:53.180898 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d339cd-3a99-489f-8d51-f65bbef63ab8" path="/var/lib/kubelet/pods/16d339cd-3a99-489f-8d51-f65bbef63ab8/volumes" Feb 24 15:17:53 crc kubenswrapper[4982]: I0224 15:17:53.951696 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9dd449a-d430-42cf-8d1a-492c750fde59","Type":"ContainerStarted","Data":"35070ea43e18b3da04cf6c09db89e22187143b9c44a51da673504bd9e085d7a1"} Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.075318 4982 generic.go:334] "Generic (PLEG): container finished" podID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerID="60cbd02be2e19922f7155b97a44cb40ea052b45a59a5b3d9adb2b51e3945fad6" exitCode=0 Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.075840 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6","Type":"ContainerDied","Data":"60cbd02be2e19922f7155b97a44cb40ea052b45a59a5b3d9adb2b51e3945fad6"} Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.087301 4982 generic.go:334] "Generic (PLEG): container finished" podID="513f6549-901c-4faf-9011-af95fe7398ae" containerID="c2aff8a7f1dc2f2006eea95379acfeffb3dc9d305e76fa39550d666641993f24" exitCode=0 Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.087343 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"513f6549-901c-4faf-9011-af95fe7398ae","Type":"ContainerDied","Data":"c2aff8a7f1dc2f2006eea95379acfeffb3dc9d305e76fa39550d666641993f24"} Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.225760 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532438-bk652"] Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.227991 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532438-bk652" Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.231168 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.231361 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.231551 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.246575 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fmws\" (UniqueName: \"kubernetes.io/projected/7d1f1796-cccb-4a17-b169-e3240d7e884d-kube-api-access-9fmws\") pod \"auto-csr-approver-29532438-bk652\" (UID: \"7d1f1796-cccb-4a17-b169-e3240d7e884d\") " pod="openshift-infra/auto-csr-approver-29532438-bk652" Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.263859 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532438-bk652"] Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.353276 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fmws\" (UniqueName: \"kubernetes.io/projected/7d1f1796-cccb-4a17-b169-e3240d7e884d-kube-api-access-9fmws\") pod \"auto-csr-approver-29532438-bk652\" (UID: \"7d1f1796-cccb-4a17-b169-e3240d7e884d\") " pod="openshift-infra/auto-csr-approver-29532438-bk652" Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.374450 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fmws\" (UniqueName: \"kubernetes.io/projected/7d1f1796-cccb-4a17-b169-e3240d7e884d-kube-api-access-9fmws\") pod \"auto-csr-approver-29532438-bk652\" (UID: \"7d1f1796-cccb-4a17-b169-e3240d7e884d\") " pod="openshift-infra/auto-csr-approver-29532438-bk652" Feb 24 15:18:00 crc kubenswrapper[4982]: I0224 15:18:00.601353 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532438-bk652" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.624999 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.756441 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.819277 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-erlang-cookie\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.819437 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-tls\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.819577 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-plugins-conf\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.819715 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-plugins\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.820941 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.821010 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-746sb\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-kube-api-access-746sb\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.821056 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-config-data\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.821090 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-pod-info\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.821163 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-erlang-cookie-secret\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.821197 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-confd\") pod 
\"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.821274 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-server-conf\") pod \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\" (UID: \"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6\") " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.845419 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.847747 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.880759 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.900595 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-pod-info" (OuterVolumeSpecName: "pod-info") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.904927 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-kube-api-access-746sb" (OuterVolumeSpecName: "kube-api-access-746sb") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "kube-api-access-746sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.907144 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.907338 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.946255 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a" (OuterVolumeSpecName: "persistence") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "pvc-5154063a-8201-4c92-94b9-907fa3b1e59a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.948774 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.948805 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.948825 4982 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.948836 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.948872 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") on node \"crc\" " Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.948889 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-746sb\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-kube-api-access-746sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.948908 4982 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-pod-info\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:02 crc kubenswrapper[4982]: I0224 15:18:02.948930 4982 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.034638 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-config-data" (OuterVolumeSpecName: "config-data") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.038309 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.038431 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5154063a-8201-4c92-94b9-907fa3b1e59a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a") on node "crc"
Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.050751 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-config-data\") on node \"crc\" DevicePath \"\""
Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.050796 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") on node \"crc\" DevicePath \"\""
Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.113943 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-server-conf" (OuterVolumeSpecName: "server-conf") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.136478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6","Type":"ContainerDied","Data":"0a2f945a404222b50fa7c2998f8dd7031fcbda9c30b1a9674e1e79ead812a4fb"}
Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.136538 4982 scope.go:117] "RemoveContainer" containerID="60cbd02be2e19922f7155b97a44cb40ea052b45a59a5b3d9adb2b51e3945fad6"
Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.136690 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.152597 4982 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-server-conf\") on node \"crc\" DevicePath \"\""
Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.177732 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" (UID: "87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.255753 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.486663 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.514878 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.564683 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 15:18:03 crc kubenswrapper[4982]: E0224 15:18:03.565376 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerName="rabbitmq" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.565399 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerName="rabbitmq" Feb 24 15:18:03 crc kubenswrapper[4982]: E0224 15:18:03.565436 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerName="setup-container" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.565445 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerName="setup-container" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.565793 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" containerName="rabbitmq" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.567484 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.573784 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.574185 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.574420 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.574632 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.574839 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.575082 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.575661 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tnb2z" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.581851 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.669987 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.670100 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc18fffb-2c78-4097-8145-143bf44b11dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.670241 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc18fffb-2c78-4097-8145-143bf44b11dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.670397 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.670474 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zln64\" (UniqueName: \"kubernetes.io/projected/fc18fffb-2c78-4097-8145-143bf44b11dc-kube-api-access-zln64\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 
15:18:03.670536 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.670562 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc18fffb-2c78-4097-8145-143bf44b11dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.670606 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.670635 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc18fffb-2c78-4097-8145-143bf44b11dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.670691 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.670731 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc18fffb-2c78-4097-8145-143bf44b11dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.772990 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773385 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc18fffb-2c78-4097-8145-143bf44b11dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773434 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773485 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc18fffb-2c78-4097-8145-143bf44b11dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773621 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc18fffb-2c78-4097-8145-143bf44b11dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773682 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773736 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zln64\" (UniqueName: \"kubernetes.io/projected/fc18fffb-2c78-4097-8145-143bf44b11dc-kube-api-access-zln64\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773778 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773795 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc18fffb-2c78-4097-8145-143bf44b11dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773825 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.773845 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc18fffb-2c78-4097-8145-143bf44b11dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.774827 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.775053 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/fc18fffb-2c78-4097-8145-143bf44b11dc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.791775 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc18fffb-2c78-4097-8145-143bf44b11dc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.795031 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.796696 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.796731 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7b8d62671965b6040285a6764c0fa44cceb017a469216241927199c14219dda8/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.801486 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc18fffb-2c78-4097-8145-143bf44b11dc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.802851 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.816126 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc18fffb-2c78-4097-8145-143bf44b11dc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.818260 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc18fffb-2c78-4097-8145-143bf44b11dc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.819056 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zln64\" (UniqueName: \"kubernetes.io/projected/fc18fffb-2c78-4097-8145-143bf44b11dc-kube-api-access-zln64\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.819154 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc18fffb-2c78-4097-8145-143bf44b11dc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.929890 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-jsnm7"] Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.932968 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.935237 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.951651 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-jsnm7"] Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.975260 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5154063a-8201-4c92-94b9-907fa3b1e59a\") pod \"rabbitmq-cell1-server-0\" (UID: \"fc18fffb-2c78-4097-8145-143bf44b11dc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.978076 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.978337 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.978456 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-config\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.979780 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szsxg\" (UniqueName: \"kubernetes.io/projected/501451ff-48f4-4ce7-afa6-50220d7f9ab7-kube-api-access-szsxg\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.979873 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") 
" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.980051 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:03 crc kubenswrapper[4982]: I0224 15:18:03.980140 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.008251 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.105138 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.105283 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.105623 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.105695 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-config\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.105764 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szsxg\" (UniqueName: \"kubernetes.io/projected/501451ff-48f4-4ce7-afa6-50220d7f9ab7-kube-api-access-szsxg\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.105820 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.105959 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.107192 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.108118 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.108407 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.108729 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-config\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.108926 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.109334 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.124040 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szsxg\" (UniqueName: \"kubernetes.io/projected/501451ff-48f4-4ce7-afa6-50220d7f9ab7-kube-api-access-szsxg\") pod \"dnsmasq-dns-5b75489c6f-jsnm7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:04 crc kubenswrapper[4982]: I0224 15:18:04.294184 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:05 crc kubenswrapper[4982]: I0224 15:18:05.158730 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6" path="/var/lib/kubelet/pods/87e1bae7-e3f9-444c-8f6d-1ba64ed6afa6/volumes" Feb 24 15:18:08 crc kubenswrapper[4982]: I0224 15:18:08.738255 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:18:08 crc kubenswrapper[4982]: I0224 15:18:08.738884 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:18:08 crc kubenswrapper[4982]: I0224 15:18:08.738937 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:18:08 crc kubenswrapper[4982]: I0224 15:18:08.740325 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:18:08 crc kubenswrapper[4982]: I0224 15:18:08.740411 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" gracePeriod=600 Feb 24 15:18:09 crc kubenswrapper[4982]: I0224 15:18:09.256701 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" exitCode=0 Feb 24 15:18:09 crc kubenswrapper[4982]: I0224 15:18:09.256774 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7"} Feb 24 15:18:11 crc kubenswrapper[4982]: E0224 15:18:11.482247 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:18:11 crc kubenswrapper[4982]: E0224 15:18:11.510106 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 24 15:18:11 crc kubenswrapper[4982]: E0224 15:18:11.510557 4982 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 24 15:18:11 crc kubenswrapper[4982]: E0224 15:18:11.510832 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4z9h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qf6l6_openstack(bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:18:11 crc kubenswrapper[4982]: E0224 15:18:11.512174 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-qf6l6" podUID="bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.586452 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.618666 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-plugins\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.618868 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/513f6549-901c-4faf-9011-af95fe7398ae-pod-info\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.618949 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/513f6549-901c-4faf-9011-af95fe7398ae-erlang-cookie-secret\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.619026 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-plugins-conf\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.619093 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-confd\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.619145 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfvwb\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-kube-api-access-mfvwb\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.619173 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-server-conf\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.619292 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-config-data\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.619387 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-tls\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.620233 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") pod 
\"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.620284 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-erlang-cookie\") pod \"513f6549-901c-4faf-9011-af95fe7398ae\" (UID: \"513f6549-901c-4faf-9011-af95fe7398ae\") " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.622258 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.624152 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.636568 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.667163 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-kube-api-access-mfvwb" (OuterVolumeSpecName: "kube-api-access-mfvwb") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "kube-api-access-mfvwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.672823 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/513f6549-901c-4faf-9011-af95fe7398ae-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.687124 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/513f6549-901c-4faf-9011-af95fe7398ae-pod-info" (OuterVolumeSpecName: "pod-info") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.687165 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-config-data" (OuterVolumeSpecName: "config-data") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.688027 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.727767 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd" (OuterVolumeSpecName: "persistence") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.734274 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") on node \"crc\" " Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.734334 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.734349 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.734360 4982 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/513f6549-901c-4faf-9011-af95fe7398ae-pod-info\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.734377 4982 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/513f6549-901c-4faf-9011-af95fe7398ae-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.734386 4982 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.734395 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfvwb\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-kube-api-access-mfvwb\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.734403 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.734415 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.806771 4982 csi_attacher.go:630] 
kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.806948 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd") on node "crc" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.824268 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-server-conf" (OuterVolumeSpecName: "server-conf") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.836902 4982 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/513f6549-901c-4faf-9011-af95fe7398ae-server-conf\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.836934 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.843817 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "513f6549-901c-4faf-9011-af95fe7398ae" (UID: "513f6549-901c-4faf-9011-af95fe7398ae"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:11 crc kubenswrapper[4982]: I0224 15:18:11.939169 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/513f6549-901c-4faf-9011-af95fe7398ae-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.143196 4982 scope.go:117] "RemoveContainer" containerID="ca9e4f5ce81b1272724e6bbc4e2d1009e86f074a2f97121639328c4ceafcb76f" Feb 24 15:18:12 crc kubenswrapper[4982]: E0224 15:18:12.167308 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 24 15:18:12 crc kubenswrapper[4982]: E0224 15:18:12.167384 4982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 24 15:18:12 crc kubenswrapper[4982]: E0224 15:18:12.167599 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h5f8h675h5b9hcfhd7h695h8fh675h7fh6hdfh5chc6h66h55h6bh67fh6ch665h578h5cfh89h6ch578h678h676h65bhch667h59dhfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lljdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod ceilometer-0_openstack(f9dd449a-d430-42cf-8d1a-492c750fde59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.253912 4982 scope.go:117] "RemoveContainer" containerID="6f34013f8efc594f596b2fce7ffa43397c7edb98c85b96c1bc1ccf7f7c29f143" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.328437 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"513f6549-901c-4faf-9011-af95fe7398ae","Type":"ContainerDied","Data":"fc8cd992b63ffdbe62c9004d9eebfd642bee0eb9bc70f636237c45bc62eaed59"} Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.328530 4982 scope.go:117] "RemoveContainer" containerID="c2aff8a7f1dc2f2006eea95379acfeffb3dc9d305e76fa39550d666641993f24" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.328465 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.348396 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:18:12 crc kubenswrapper[4982]: E0224 15:18:12.349113 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:18:12 crc kubenswrapper[4982]: E0224 15:18:12.349376 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-qf6l6" podUID="bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.392736 4982 scope.go:117] "RemoveContainer" containerID="d6fc08d870d7fad19cfccc0848ad3a2332a787597366253b2f0903bce152b284" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.458021 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.492364 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.508257 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 24 15:18:12 crc kubenswrapper[4982]: E0224 15:18:12.508871 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="rabbitmq" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.508893 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="rabbitmq" Feb 24 15:18:12 crc kubenswrapper[4982]: E0224 15:18:12.508912 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="setup-container" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.508918 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="setup-container" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 
15:18:12.509183 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="513f6549-901c-4faf-9011-af95fe7398ae" containerName="rabbitmq" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.522871 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.523014 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.669943 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b68b4733-09b0-4fff-b032-f3339306a04d-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.670995 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b68b4733-09b0-4fff-b032-f3339306a04d-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.671031 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.671091 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.671187 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.671210 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b68b4733-09b0-4fff-b032-f3339306a04d-config-data\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.671272 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.671321 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: 
\"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.671438 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b68b4733-09b0-4fff-b032-f3339306a04d-server-conf\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.672326 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b68b4733-09b0-4fff-b032-f3339306a04d-pod-info\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.673383 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnvsn\" (UniqueName: \"kubernetes.io/projected/b68b4733-09b0-4fff-b032-f3339306a04d-kube-api-access-mnvsn\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.777774 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532438-bk652"] Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.804606 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b68b4733-09b0-4fff-b032-f3339306a04d-pod-info\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.804702 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnvsn\" (UniqueName: \"kubernetes.io/projected/b68b4733-09b0-4fff-b032-f3339306a04d-kube-api-access-mnvsn\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.804833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b68b4733-09b0-4fff-b032-f3339306a04d-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.804909 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b68b4733-09b0-4fff-b032-f3339306a04d-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.804938 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.804997 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-plugins\") pod 
\"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.805093 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b68b4733-09b0-4fff-b032-f3339306a04d-config-data\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.805109 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.805148 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.805193 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.805354 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b68b4733-09b0-4fff-b032-f3339306a04d-server-conf\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.809030 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.810240 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b68b4733-09b0-4fff-b032-f3339306a04d-server-conf\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.812231 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.812854 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b68b4733-09b0-4fff-b032-f3339306a04d-config-data\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.813343 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b68b4733-09b0-4fff-b032-f3339306a04d-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.814431 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.816024 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b68b4733-09b0-4fff-b032-f3339306a04d-pod-info\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.818008 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b68b4733-09b0-4fff-b032-f3339306a04d-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.821542 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.824592 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b68b4733-09b0-4fff-b032-f3339306a04d-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.831068 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.831114 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b9bff04708a599d5f53c572127237f4c9d010110aa6bab1729a82f64f95a892/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.874864 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnvsn\" (UniqueName: \"kubernetes.io/projected/b68b4733-09b0-4fff-b032-f3339306a04d-kube-api-access-mnvsn\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.945835 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3cf0909d-6335-4b00-898e-4f6fe08565cd\") pod \"rabbitmq-server-2\" (UID: \"b68b4733-09b0-4fff-b032-f3339306a04d\") " pod="openstack/rabbitmq-server-2" Feb 24 15:18:12 crc kubenswrapper[4982]: I0224 15:18:12.966862 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-jsnm7"] Feb 24 15:18:13 crc kubenswrapper[4982]: I0224 15:18:13.160868 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 24 15:18:13 crc kubenswrapper[4982]: I0224 15:18:13.165258 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513f6549-901c-4faf-9011-af95fe7398ae" path="/var/lib/kubelet/pods/513f6549-901c-4faf-9011-af95fe7398ae/volumes" Feb 24 15:18:13 crc kubenswrapper[4982]: I0224 15:18:13.408115 4982 generic.go:334] "Generic (PLEG): container finished" podID="501451ff-48f4-4ce7-afa6-50220d7f9ab7" containerID="72908c03a426801de84b09e268bd5d5a335cbb98556af08df8c7b1a7f104d750" exitCode=0 Feb 24 15:18:13 crc kubenswrapper[4982]: I0224 15:18:13.408181 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" event={"ID":"501451ff-48f4-4ce7-afa6-50220d7f9ab7","Type":"ContainerDied","Data":"72908c03a426801de84b09e268bd5d5a335cbb98556af08df8c7b1a7f104d750"} Feb 24 15:18:13 crc kubenswrapper[4982]: I0224 15:18:13.408514 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" event={"ID":"501451ff-48f4-4ce7-afa6-50220d7f9ab7","Type":"ContainerStarted","Data":"6580ce3a458460e36e23a23636d27ec5963b1715ad7e34982d57e7df53f8840e"} Feb 24 15:18:13 crc kubenswrapper[4982]: I0224 15:18:13.418696 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc18fffb-2c78-4097-8145-143bf44b11dc","Type":"ContainerStarted","Data":"23496523206e990e3a7b89af1a70dea51f5b1f9b22b5757533c272bb6cddfea3"} Feb 24 15:18:13 crc kubenswrapper[4982]: I0224 15:18:13.419820 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532438-bk652" event={"ID":"7d1f1796-cccb-4a17-b169-e3240d7e884d","Type":"ContainerStarted","Data":"5fc2295cb14675f79de779e0b523084bc2cd9c61a69494eb91d640d8462827fc"} Feb 24 15:18:13 crc kubenswrapper[4982]: I0224 15:18:13.423475 4982 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9dd449a-d430-42cf-8d1a-492c750fde59","Type":"ContainerStarted","Data":"ff5566a5d6b4fad8de7a98fe31965aa48f33cd8c93eb1015168f8421eda4ea9a"} Feb 24 15:18:13 crc kubenswrapper[4982]: I0224 15:18:13.674573 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 24 15:18:14 crc kubenswrapper[4982]: I0224 15:18:14.445081 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" event={"ID":"501451ff-48f4-4ce7-afa6-50220d7f9ab7","Type":"ContainerStarted","Data":"e5769082a9822d28a8b6a4212c52676493c55ab640ee1a45baf0dc35abc99bcd"} Feb 24 15:18:14 crc kubenswrapper[4982]: I0224 15:18:14.445530 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:14 crc kubenswrapper[4982]: I0224 15:18:14.448606 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532438-bk652" event={"ID":"7d1f1796-cccb-4a17-b169-e3240d7e884d","Type":"ContainerStarted","Data":"bb0725cb36d0055b514d3b955515b8d9f0084a49e013ea563ebafe325cc20b2c"} Feb 24 15:18:14 crc kubenswrapper[4982]: I0224 15:18:14.449985 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b68b4733-09b0-4fff-b032-f3339306a04d","Type":"ContainerStarted","Data":"4f5a6ffccae9769142b1ce27ac546d01b12315849ff142bb8a9791cc53589cb4"} Feb 24 15:18:14 crc kubenswrapper[4982]: I0224 15:18:14.451964 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9dd449a-d430-42cf-8d1a-492c750fde59","Type":"ContainerStarted","Data":"8dc9e724e27f8c6d6fb4249c8b745c7554551ee7fda6039b5007b8e62f7bce5a"} Feb 24 15:18:14 crc kubenswrapper[4982]: I0224 15:18:14.491863 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" podStartSLOduration=11.491846795 podStartE2EDuration="11.491846795s" podCreationTimestamp="2026-02-24 15:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:18:14.482936084 +0000 UTC m=+1756.101994607" watchObservedRunningTime="2026-02-24 15:18:14.491846795 +0000 UTC m=+1756.110905288" Feb 24 15:18:14 crc kubenswrapper[4982]: I0224 15:18:14.510843 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532438-bk652" podStartSLOduration=13.496562829 podStartE2EDuration="14.510813221s" podCreationTimestamp="2026-02-24 15:18:00 +0000 UTC" firstStartedPulling="2026-02-24 15:18:12.872895242 +0000 UTC m=+1754.491953735" lastFinishedPulling="2026-02-24 15:18:13.887145634 +0000 UTC m=+1755.506204127" observedRunningTime="2026-02-24 15:18:14.499247516 +0000 UTC m=+1756.118306029" watchObservedRunningTime="2026-02-24 15:18:14.510813221 +0000 UTC m=+1756.129871734" Feb 24 15:18:15 crc kubenswrapper[4982]: I0224 15:18:15.468341 4982 generic.go:334] "Generic (PLEG): container finished" podID="7d1f1796-cccb-4a17-b169-e3240d7e884d" containerID="bb0725cb36d0055b514d3b955515b8d9f0084a49e013ea563ebafe325cc20b2c" exitCode=0 Feb 24 15:18:15 crc kubenswrapper[4982]: I0224 15:18:15.468405 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532438-bk652" 
event={"ID":"7d1f1796-cccb-4a17-b169-e3240d7e884d","Type":"ContainerDied","Data":"bb0725cb36d0055b514d3b955515b8d9f0084a49e013ea563ebafe325cc20b2c"} Feb 24 15:18:15 crc kubenswrapper[4982]: I0224 15:18:15.470892 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc18fffb-2c78-4097-8145-143bf44b11dc","Type":"ContainerStarted","Data":"1a067886fb6c2aa45f12b41436f4ba313b2816141e2d36483fda5f84892198bf"} Feb 24 15:18:16 crc kubenswrapper[4982]: E0224 15:18:16.327223 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="f9dd449a-d430-42cf-8d1a-492c750fde59" Feb 24 15:18:16 crc kubenswrapper[4982]: I0224 15:18:16.482440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b68b4733-09b0-4fff-b032-f3339306a04d","Type":"ContainerStarted","Data":"1fdb8e994e6fc45f1d3a9d7bb1fb9729ab7f90114e27372e82d7ef0b76fc51f5"} Feb 24 15:18:16 crc kubenswrapper[4982]: I0224 15:18:16.486002 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9dd449a-d430-42cf-8d1a-492c750fde59","Type":"ContainerStarted","Data":"14225b297fca46ff3d533b5d6540f7b351432a0286ac9db2208d6112138f58c4"} Feb 24 15:18:16 crc kubenswrapper[4982]: I0224 15:18:16.486147 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 24 15:18:16 crc kubenswrapper[4982]: E0224 15:18:16.487195 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f9dd449a-d430-42cf-8d1a-492c750fde59" Feb 24 15:18:16 crc kubenswrapper[4982]: I0224 15:18:16.934702 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532438-bk652" Feb 24 15:18:17 crc kubenswrapper[4982]: I0224 15:18:17.129423 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fmws\" (UniqueName: \"kubernetes.io/projected/7d1f1796-cccb-4a17-b169-e3240d7e884d-kube-api-access-9fmws\") pod \"7d1f1796-cccb-4a17-b169-e3240d7e884d\" (UID: \"7d1f1796-cccb-4a17-b169-e3240d7e884d\") " Feb 24 15:18:17 crc kubenswrapper[4982]: I0224 15:18:17.141789 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1f1796-cccb-4a17-b169-e3240d7e884d-kube-api-access-9fmws" (OuterVolumeSpecName: "kube-api-access-9fmws") pod "7d1f1796-cccb-4a17-b169-e3240d7e884d" (UID: "7d1f1796-cccb-4a17-b169-e3240d7e884d"). InnerVolumeSpecName "kube-api-access-9fmws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:17 crc kubenswrapper[4982]: I0224 15:18:17.233336 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fmws\" (UniqueName: \"kubernetes.io/projected/7d1f1796-cccb-4a17-b169-e3240d7e884d-kube-api-access-9fmws\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:17 crc kubenswrapper[4982]: I0224 15:18:17.505268 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532438-bk652" event={"ID":"7d1f1796-cccb-4a17-b169-e3240d7e884d","Type":"ContainerDied","Data":"5fc2295cb14675f79de779e0b523084bc2cd9c61a69494eb91d640d8462827fc"} Feb 24 15:18:17 crc kubenswrapper[4982]: I0224 15:18:17.505312 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fc2295cb14675f79de779e0b523084bc2cd9c61a69494eb91d640d8462827fc" Feb 24 15:18:17 crc kubenswrapper[4982]: I0224 15:18:17.506256 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532438-bk652" Feb 24 15:18:17 crc kubenswrapper[4982]: E0224 15:18:17.508647 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="f9dd449a-d430-42cf-8d1a-492c750fde59" Feb 24 15:18:18 crc kubenswrapper[4982]: I0224 15:18:18.036689 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532432-rp655"] Feb 24 15:18:18 crc kubenswrapper[4982]: I0224 15:18:18.046953 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532432-rp655"] Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.167776 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e68d2ce-821e-4070-affc-79adbc1a34ca" path="/var/lib/kubelet/pods/6e68d2ce-821e-4070-affc-79adbc1a34ca/volumes" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.297741 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.390127 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5bfvp"] Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.390356 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" podUID="545325e5-ce54-4c9c-81b0-162d73c405fe" containerName="dnsmasq-dns" containerID="cri-o://24075c5dcf9d10a6b86e31ecf3aae503e3ca37c42404dc4f4b776407f77b55fe" gracePeriod=10 Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.552312 4982 generic.go:334] "Generic (PLEG): container finished" podID="545325e5-ce54-4c9c-81b0-162d73c405fe" containerID="24075c5dcf9d10a6b86e31ecf3aae503e3ca37c42404dc4f4b776407f77b55fe" exitCode=0 Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.552640 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" event={"ID":"545325e5-ce54-4c9c-81b0-162d73c405fe","Type":"ContainerDied","Data":"24075c5dcf9d10a6b86e31ecf3aae503e3ca37c42404dc4f4b776407f77b55fe"} Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.561044 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-82n9c"] Feb 24 15:18:19 crc kubenswrapper[4982]: 
E0224 15:18:19.561664 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1f1796-cccb-4a17-b169-e3240d7e884d" containerName="oc" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.561685 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1f1796-cccb-4a17-b169-e3240d7e884d" containerName="oc" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.561975 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1f1796-cccb-4a17-b169-e3240d7e884d" containerName="oc" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.563648 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.597947 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-82n9c"] Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.702813 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9jkg\" (UniqueName: \"kubernetes.io/projected/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-kube-api-access-k9jkg\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.702880 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.702947 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-config\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.703150 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.703271 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.703435 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.703488 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.805658 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.805724 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.805815 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9jkg\" (UniqueName: \"kubernetes.io/projected/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-kube-api-access-k9jkg\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.805848 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.805907 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-config\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.805957 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.805995 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.806913 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.807437 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.808004 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.808773 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-config\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.809261 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.812009 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.847866 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9jkg\" (UniqueName: \"kubernetes.io/projected/4c978f3a-f7d2-4a33-a206-b38bf80aae1f-kube-api-access-k9jkg\") pod \"dnsmasq-dns-5d75f767dc-82n9c\" (UID: \"4c978f3a-f7d2-4a33-a206-b38bf80aae1f\") " pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:19 crc kubenswrapper[4982]: I0224 15:18:19.905293 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.045183 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.111010 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2pkn\" (UniqueName: \"kubernetes.io/projected/545325e5-ce54-4c9c-81b0-162d73c405fe-kube-api-access-r2pkn\") pod \"545325e5-ce54-4c9c-81b0-162d73c405fe\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.111138 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-sb\") pod \"545325e5-ce54-4c9c-81b0-162d73c405fe\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.111206 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-config\") pod \"545325e5-ce54-4c9c-81b0-162d73c405fe\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.111290 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-swift-storage-0\") pod \"545325e5-ce54-4c9c-81b0-162d73c405fe\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.111316 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-svc\") pod \"545325e5-ce54-4c9c-81b0-162d73c405fe\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.111394 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-nb\") pod \"545325e5-ce54-4c9c-81b0-162d73c405fe\" (UID: \"545325e5-ce54-4c9c-81b0-162d73c405fe\") " Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.119224 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545325e5-ce54-4c9c-81b0-162d73c405fe-kube-api-access-r2pkn" (OuterVolumeSpecName: "kube-api-access-r2pkn") pod "545325e5-ce54-4c9c-81b0-162d73c405fe" (UID: "545325e5-ce54-4c9c-81b0-162d73c405fe"). InnerVolumeSpecName "kube-api-access-r2pkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.221274 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-config" (OuterVolumeSpecName: "config") pod "545325e5-ce54-4c9c-81b0-162d73c405fe" (UID: "545325e5-ce54-4c9c-81b0-162d73c405fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.230408 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "545325e5-ce54-4c9c-81b0-162d73c405fe" (UID: "545325e5-ce54-4c9c-81b0-162d73c405fe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.231583 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.231598 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.231608 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2pkn\" (UniqueName: \"kubernetes.io/projected/545325e5-ce54-4c9c-81b0-162d73c405fe-kube-api-access-r2pkn\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.286651 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "545325e5-ce54-4c9c-81b0-162d73c405fe" (UID: "545325e5-ce54-4c9c-81b0-162d73c405fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.303654 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "545325e5-ce54-4c9c-81b0-162d73c405fe" (UID: "545325e5-ce54-4c9c-81b0-162d73c405fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.334247 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.334285 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.356411 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "545325e5-ce54-4c9c-81b0-162d73c405fe" (UID: "545325e5-ce54-4c9c-81b0-162d73c405fe"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.436433 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545325e5-ce54-4c9c-81b0-162d73c405fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:20 crc kubenswrapper[4982]: W0224 15:18:20.499797 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c978f3a_f7d2_4a33_a206_b38bf80aae1f.slice/crio-bd80c74bd0a820e9a4015eaa239bd50b2bbc394923b9fdef6aafe0b3525c636e WatchSource:0}: Error finding container bd80c74bd0a820e9a4015eaa239bd50b2bbc394923b9fdef6aafe0b3525c636e: Status 404 returned error can't find the container with id bd80c74bd0a820e9a4015eaa239bd50b2bbc394923b9fdef6aafe0b3525c636e Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.505711 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-82n9c"] Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.582163 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" event={"ID":"545325e5-ce54-4c9c-81b0-162d73c405fe","Type":"ContainerDied","Data":"383b4cd4fd559a69484d3926c858d70a0a11b46f3272c9c26557acdfd19c4de9"} Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.582233 4982 scope.go:117] "RemoveContainer" containerID="24075c5dcf9d10a6b86e31ecf3aae503e3ca37c42404dc4f4b776407f77b55fe" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.583544 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-5bfvp" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.583971 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" event={"ID":"4c978f3a-f7d2-4a33-a206-b38bf80aae1f","Type":"ContainerStarted","Data":"bd80c74bd0a820e9a4015eaa239bd50b2bbc394923b9fdef6aafe0b3525c636e"} Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.695409 4982 scope.go:117] "RemoveContainer" containerID="ed6394bf579785bb6c71dc944b74b58257d3c7e8882e047a25c641e3e6d0f9f3" Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.726591 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5bfvp"] Feb 24 15:18:20 crc kubenswrapper[4982]: I0224 15:18:20.743597 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5bfvp"] Feb 24 15:18:21 crc kubenswrapper[4982]: I0224 15:18:21.172348 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545325e5-ce54-4c9c-81b0-162d73c405fe" path="/var/lib/kubelet/pods/545325e5-ce54-4c9c-81b0-162d73c405fe/volumes" Feb 24 15:18:21 crc kubenswrapper[4982]: I0224 15:18:21.617855 4982 generic.go:334] "Generic (PLEG): container finished" podID="4c978f3a-f7d2-4a33-a206-b38bf80aae1f" containerID="747d91faa54131256e274261347a99011f3e435ab3dd05e9c95a9d9a15a4760f" exitCode=0 Feb 24 15:18:21 crc kubenswrapper[4982]: I0224 15:18:21.618131 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" event={"ID":"4c978f3a-f7d2-4a33-a206-b38bf80aae1f","Type":"ContainerDied","Data":"747d91faa54131256e274261347a99011f3e435ab3dd05e9c95a9d9a15a4760f"} Feb 24 15:18:22 crc kubenswrapper[4982]: I0224 15:18:22.631833 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" 
event={"ID":"4c978f3a-f7d2-4a33-a206-b38bf80aae1f","Type":"ContainerStarted","Data":"736134daaf5ca6761f152557b8749b13a490f69cc8acd74329e0e19c8fc95363"} Feb 24 15:18:22 crc kubenswrapper[4982]: I0224 15:18:22.632385 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:22 crc kubenswrapper[4982]: I0224 15:18:22.671577 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" podStartSLOduration=3.671544978 podStartE2EDuration="3.671544978s" podCreationTimestamp="2026-02-24 15:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:18:22.653618901 +0000 UTC m=+1764.272677414" watchObservedRunningTime="2026-02-24 15:18:22.671544978 +0000 UTC m=+1764.290603491" Feb 24 15:18:23 crc kubenswrapper[4982]: I0224 15:18:23.146849 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:18:23 crc kubenswrapper[4982]: E0224 15:18:23.147466 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:18:25 crc kubenswrapper[4982]: I0224 15:18:25.668237 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qf6l6" event={"ID":"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c","Type":"ContainerStarted","Data":"4f8de8f1edd75dc7844f16517d4b103f53648a8863bb2ceacca8474df343827c"} Feb 24 15:18:25 crc kubenswrapper[4982]: I0224 15:18:25.693530 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-qf6l6" podStartSLOduration=2.326774897 podStartE2EDuration="40.69350736s" podCreationTimestamp="2026-02-24 15:17:45 +0000 UTC" firstStartedPulling="2026-02-24 15:17:45.985595358 +0000 UTC m=+1727.604653851" lastFinishedPulling="2026-02-24 15:18:24.352327821 +0000 UTC m=+1765.971386314" observedRunningTime="2026-02-24 15:18:25.690065727 +0000 UTC m=+1767.309124230" watchObservedRunningTime="2026-02-24 15:18:25.69350736 +0000 UTC m=+1767.312565853" Feb 24 15:18:26 crc kubenswrapper[4982]: I0224 15:18:26.686874 4982 generic.go:334] "Generic (PLEG): container finished" podID="bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" containerID="4f8de8f1edd75dc7844f16517d4b103f53648a8863bb2ceacca8474df343827c" exitCode=0 Feb 24 15:18:26 crc kubenswrapper[4982]: I0224 15:18:26.686976 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qf6l6" event={"ID":"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c","Type":"ContainerDied","Data":"4f8de8f1edd75dc7844f16517d4b103f53648a8863bb2ceacca8474df343827c"} Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.178578 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-qf6l6" Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.239207 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-config-data\") pod \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.239272 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-combined-ca-bundle\") pod \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.239309 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z9h4\" (UniqueName: \"kubernetes.io/projected/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-kube-api-access-4z9h4\") pod \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\" (UID: \"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c\") " Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.245761 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-kube-api-access-4z9h4" (OuterVolumeSpecName: "kube-api-access-4z9h4") pod "bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" (UID: "bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c"). InnerVolumeSpecName "kube-api-access-4z9h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.295651 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" (UID: "bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.343810 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.343848 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z9h4\" (UniqueName: \"kubernetes.io/projected/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-kube-api-access-4z9h4\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.369383 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-config-data" (OuterVolumeSpecName: "config-data") pod "bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" (UID: "bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.446209 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.728830 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qf6l6" event={"ID":"bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c","Type":"ContainerDied","Data":"25c45a82f3d517a66250a298457667a2ae7d3a0520037921d9507e2f3c284ec5"} Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.729166 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c45a82f3d517a66250a298457667a2ae7d3a0520037921d9507e2f3c284ec5" Feb 24 15:18:28 crc kubenswrapper[4982]: I0224 15:18:28.728967 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qf6l6" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.194892 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.228880 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.660462 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-847b6bf986-6knzb"] Feb 24 15:18:29 crc kubenswrapper[4982]: E0224 15:18:29.661383 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" containerName="heat-db-sync" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.661402 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" containerName="heat-db-sync" Feb 24 15:18:29 crc kubenswrapper[4982]: E0224 15:18:29.661440 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545325e5-ce54-4c9c-81b0-162d73c405fe" containerName="dnsmasq-dns" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.661448 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="545325e5-ce54-4c9c-81b0-162d73c405fe" containerName="dnsmasq-dns" Feb 24 15:18:29 crc kubenswrapper[4982]: E0224 15:18:29.661458 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545325e5-ce54-4c9c-81b0-162d73c405fe" containerName="init" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.661466 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="545325e5-ce54-4c9c-81b0-162d73c405fe" containerName="init" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.661894 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" containerName="heat-db-sync" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.661924 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="545325e5-ce54-4c9c-81b0-162d73c405fe" containerName="dnsmasq-dns" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.662977 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.669657 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-847b6bf986-6knzb"] Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.787039 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-779fc6f99c-lz6wq"] Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.788810 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.804077 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-66f8dcd8d5-cmr9n"] Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.804667 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea93c40b-bf1e-433e-8782-fcb3781962a7-config-data-custom\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.804769 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea93c40b-bf1e-433e-8782-fcb3781962a7-config-data\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.804795 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea93c40b-bf1e-433e-8782-fcb3781962a7-combined-ca-bundle\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.804823 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zknvd\" (UniqueName: \"kubernetes.io/projected/ea93c40b-bf1e-433e-8782-fcb3781962a7-kube-api-access-zknvd\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.805979 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.824390 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-779fc6f99c-lz6wq"] Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.854561 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66f8dcd8d5-cmr9n"] Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.907828 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zknvd\" (UniqueName: \"kubernetes.io/projected/ea93c40b-bf1e-433e-8782-fcb3781962a7-kube-api-access-zknvd\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.907888 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-config-data-custom\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.907927 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-internal-tls-certs\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908012 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-combined-ca-bundle\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908052 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcqvg\" (UniqueName: \"kubernetes.io/projected/533504d0-9d95-4bbd-8c8e-10696cd115a6-kube-api-access-xcqvg\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908078 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-public-tls-certs\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908123 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-internal-tls-certs\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908153 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-config-data\") pod 
\"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908174 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea93c40b-bf1e-433e-8782-fcb3781962a7-config-data-custom\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908190 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-config-data-custom\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908210 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-config-data\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908270 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqt6\" (UniqueName: \"kubernetes.io/projected/f7257c07-0ab1-4030-9c40-c942d0de78f9-kube-api-access-njqt6\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908564 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-public-tls-certs\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908672 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-combined-ca-bundle\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908720 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea93c40b-bf1e-433e-8782-fcb3781962a7-config-data\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.908788 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea93c40b-bf1e-433e-8782-fcb3781962a7-combined-ca-bundle\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.909663 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-82n9c" Feb 24 15:18:29 crc 
kubenswrapper[4982]: I0224 15:18:29.914581 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea93c40b-bf1e-433e-8782-fcb3781962a7-combined-ca-bundle\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.914770 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea93c40b-bf1e-433e-8782-fcb3781962a7-config-data-custom\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.930822 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zknvd\" (UniqueName: \"kubernetes.io/projected/ea93c40b-bf1e-433e-8782-fcb3781962a7-kube-api-access-zknvd\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:29 crc kubenswrapper[4982]: I0224 15:18:29.931748 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea93c40b-bf1e-433e-8782-fcb3781962a7-config-data\") pod \"heat-engine-847b6bf986-6knzb\" (UID: \"ea93c40b-bf1e-433e-8782-fcb3781962a7\") " pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.008740 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-jsnm7"] Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.009279 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" podUID="501451ff-48f4-4ce7-afa6-50220d7f9ab7" containerName="dnsmasq-dns" containerID="cri-o://e5769082a9822d28a8b6a4212c52676493c55ab640ee1a45baf0dc35abc99bcd" gracePeriod=10 Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011035 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-config-data\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011091 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-config-data-custom\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011118 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-config-data\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011193 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqt6\" (UniqueName: \"kubernetes.io/projected/f7257c07-0ab1-4030-9c40-c942d0de78f9-kube-api-access-njqt6\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " 
pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011252 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-public-tls-certs\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011274 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-combined-ca-bundle\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011332 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-config-data-custom\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011366 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-internal-tls-certs\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011420 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-combined-ca-bundle\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011491 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcqvg\" (UniqueName: \"kubernetes.io/projected/533504d0-9d95-4bbd-8c8e-10696cd115a6-kube-api-access-xcqvg\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011540 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-public-tls-certs\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.011603 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-internal-tls-certs\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.027292 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-config-data-custom\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " 
pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.027650 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.028266 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-config-data\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.032060 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-combined-ca-bundle\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.036284 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-config-data-custom\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.037056 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-internal-tls-certs\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.037132 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-combined-ca-bundle\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.046124 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-public-tls-certs\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.046693 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7257c07-0ab1-4030-9c40-c942d0de78f9-internal-tls-certs\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.046918 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-config-data\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.047187 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533504d0-9d95-4bbd-8c8e-10696cd115a6-public-tls-certs\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: 
\"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.073347 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcqvg\" (UniqueName: \"kubernetes.io/projected/533504d0-9d95-4bbd-8c8e-10696cd115a6-kube-api-access-xcqvg\") pod \"heat-cfnapi-66f8dcd8d5-cmr9n\" (UID: \"533504d0-9d95-4bbd-8c8e-10696cd115a6\") " pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.088245 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqt6\" (UniqueName: \"kubernetes.io/projected/f7257c07-0ab1-4030-9c40-c942d0de78f9-kube-api-access-njqt6\") pod \"heat-api-779fc6f99c-lz6wq\" (UID: \"f7257c07-0ab1-4030-9c40-c942d0de78f9\") " pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.114233 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.140319 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:30 crc kubenswrapper[4982]: W0224 15:18:30.787764 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea93c40b_bf1e_433e_8782_fcb3781962a7.slice/crio-927b064346adcf3eba2ecade3fc4c5810c073a3bacb801c5c116ff9be597a51c WatchSource:0}: Error finding container 927b064346adcf3eba2ecade3fc4c5810c073a3bacb801c5c116ff9be597a51c: Status 404 returned error can't find the container with id 927b064346adcf3eba2ecade3fc4c5810c073a3bacb801c5c116ff9be597a51c Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.790934 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-847b6bf986-6knzb"] Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.815610 4982 generic.go:334] "Generic (PLEG): container finished" podID="501451ff-48f4-4ce7-afa6-50220d7f9ab7" containerID="e5769082a9822d28a8b6a4212c52676493c55ab640ee1a45baf0dc35abc99bcd" exitCode=0 Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.815727 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" event={"ID":"501451ff-48f4-4ce7-afa6-50220d7f9ab7","Type":"ContainerDied","Data":"e5769082a9822d28a8b6a4212c52676493c55ab640ee1a45baf0dc35abc99bcd"} Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.823896 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9dd449a-d430-42cf-8d1a-492c750fde59","Type":"ContainerStarted","Data":"b1c12f5b3f44cebd1f12e85a6c402742da457ecd3caa146d881081aead74ec97"} Feb 24 15:18:30 crc kubenswrapper[4982]: I0224 15:18:30.858465 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.452460439 podStartE2EDuration="39.858440476s" podCreationTimestamp="2026-02-24 15:17:51 +0000 UTC" firstStartedPulling="2026-02-24 15:17:52.994847157 +0000 UTC m=+1734.613905660" lastFinishedPulling="2026-02-24 15:18:29.400827204 +0000 UTC m=+1771.019885697" observedRunningTime="2026-02-24 15:18:30.849942986 +0000 UTC m=+1772.469001479" watchObservedRunningTime="2026-02-24 15:18:30.858440476 +0000 UTC m=+1772.477498969" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.202120 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-api-779fc6f99c-lz6wq"] Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.239257 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66f8dcd8d5-cmr9n"] Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.339129 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.449534 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-nb\") pod \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.449786 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-config\") pod \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.449839 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-sb\") pod \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.449897 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szsxg\" (UniqueName: \"kubernetes.io/projected/501451ff-48f4-4ce7-afa6-50220d7f9ab7-kube-api-access-szsxg\") pod \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.450001 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-swift-storage-0\") pod \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.450099 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-svc\") pod \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.450132 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-openstack-edpm-ipam\") pod \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\" (UID: \"501451ff-48f4-4ce7-afa6-50220d7f9ab7\") " Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.458715 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501451ff-48f4-4ce7-afa6-50220d7f9ab7-kube-api-access-szsxg" (OuterVolumeSpecName: "kube-api-access-szsxg") pod "501451ff-48f4-4ce7-afa6-50220d7f9ab7" (UID: "501451ff-48f4-4ce7-afa6-50220d7f9ab7"). InnerVolumeSpecName "kube-api-access-szsxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.539099 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "501451ff-48f4-4ce7-afa6-50220d7f9ab7" (UID: "501451ff-48f4-4ce7-afa6-50220d7f9ab7"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.553039 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szsxg\" (UniqueName: \"kubernetes.io/projected/501451ff-48f4-4ce7-afa6-50220d7f9ab7-kube-api-access-szsxg\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.553081 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.557420 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "501451ff-48f4-4ce7-afa6-50220d7f9ab7" (UID: "501451ff-48f4-4ce7-afa6-50220d7f9ab7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.565152 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "501451ff-48f4-4ce7-afa6-50220d7f9ab7" (UID: "501451ff-48f4-4ce7-afa6-50220d7f9ab7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.577007 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-config" (OuterVolumeSpecName: "config") pod "501451ff-48f4-4ce7-afa6-50220d7f9ab7" (UID: "501451ff-48f4-4ce7-afa6-50220d7f9ab7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.595393 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "501451ff-48f4-4ce7-afa6-50220d7f9ab7" (UID: "501451ff-48f4-4ce7-afa6-50220d7f9ab7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.608047 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "501451ff-48f4-4ce7-afa6-50220d7f9ab7" (UID: "501451ff-48f4-4ce7-afa6-50220d7f9ab7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.657556 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.657591 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-config\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.657600 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.657609 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.657618 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/501451ff-48f4-4ce7-afa6-50220d7f9ab7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.837565 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" event={"ID":"533504d0-9d95-4bbd-8c8e-10696cd115a6","Type":"ContainerStarted","Data":"5390294a72fa2b4c0357454379bd91073f62ea47bee4045d41281526038539a2"} Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.841889 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" event={"ID":"501451ff-48f4-4ce7-afa6-50220d7f9ab7","Type":"ContainerDied","Data":"6580ce3a458460e36e23a23636d27ec5963b1715ad7e34982d57e7df53f8840e"} Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.841929 4982 scope.go:117] "RemoveContainer" containerID="e5769082a9822d28a8b6a4212c52676493c55ab640ee1a45baf0dc35abc99bcd" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.842000 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-jsnm7" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.843815 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-779fc6f99c-lz6wq" event={"ID":"f7257c07-0ab1-4030-9c40-c942d0de78f9","Type":"ContainerStarted","Data":"dc661319943875f3e994deb94601ccdb450f72e37d8e0a628130df71d8616e84"} Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.851731 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-847b6bf986-6knzb" event={"ID":"ea93c40b-bf1e-433e-8782-fcb3781962a7","Type":"ContainerStarted","Data":"12b33254da038501a2a40610dde673879127f013f5a93fbc0635a52769589db5"} Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.851775 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-847b6bf986-6knzb" event={"ID":"ea93c40b-bf1e-433e-8782-fcb3781962a7","Type":"ContainerStarted","Data":"927b064346adcf3eba2ecade3fc4c5810c073a3bacb801c5c116ff9be597a51c"} Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.851924 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-847b6bf986-6knzb" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.889720 4982 scope.go:117] "RemoveContainer" containerID="72908c03a426801de84b09e268bd5d5a335cbb98556af08df8c7b1a7f104d750" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.905212 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-847b6bf986-6knzb" podStartSLOduration=2.905192832 podStartE2EDuration="2.905192832s" podCreationTimestamp="2026-02-24 15:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:18:31.868319161 +0000 UTC m=+1773.487377674" watchObservedRunningTime="2026-02-24 15:18:31.905192832 +0000 UTC m=+1773.524251325" Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.943356 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-jsnm7"] Feb 24 15:18:31 crc kubenswrapper[4982]: I0224 15:18:31.970737 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-jsnm7"] Feb 24 15:18:33 crc kubenswrapper[4982]: I0224 15:18:33.168827 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="501451ff-48f4-4ce7-afa6-50220d7f9ab7" path="/var/lib/kubelet/pods/501451ff-48f4-4ce7-afa6-50220d7f9ab7/volumes" Feb 24 15:18:33 crc kubenswrapper[4982]: I0224 15:18:33.889833 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" event={"ID":"533504d0-9d95-4bbd-8c8e-10696cd115a6","Type":"ContainerStarted","Data":"202591769c543ba05fb731ce4fbf11d6d03ef9dbeea0132cf1a09d2f3bfd1712"} Feb 24 15:18:33 crc kubenswrapper[4982]: I0224 15:18:33.890132 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:33 crc kubenswrapper[4982]: I0224 15:18:33.897651 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-779fc6f99c-lz6wq" event={"ID":"f7257c07-0ab1-4030-9c40-c942d0de78f9","Type":"ContainerStarted","Data":"fb09180bc9f907bfdda8055abbe3af76f5662b6cc61e6ce4580bfb70f812df92"} Feb 24 15:18:33 crc kubenswrapper[4982]: I0224 15:18:33.897809 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:33 crc kubenswrapper[4982]: I0224 
15:18:33.906336 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" podStartSLOduration=3.136883322 podStartE2EDuration="4.906320372s" podCreationTimestamp="2026-02-24 15:18:29 +0000 UTC" firstStartedPulling="2026-02-24 15:18:31.241594871 +0000 UTC m=+1772.860653364" lastFinishedPulling="2026-02-24 15:18:33.011031921 +0000 UTC m=+1774.630090414" observedRunningTime="2026-02-24 15:18:33.905215842 +0000 UTC m=+1775.524274345" watchObservedRunningTime="2026-02-24 15:18:33.906320372 +0000 UTC m=+1775.525378865" Feb 24 15:18:33 crc kubenswrapper[4982]: I0224 15:18:33.934304 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-779fc6f99c-lz6wq" podStartSLOduration=3.1308586 podStartE2EDuration="4.934282492s" podCreationTimestamp="2026-02-24 15:18:29 +0000 UTC" firstStartedPulling="2026-02-24 15:18:31.203376644 +0000 UTC m=+1772.822435147" lastFinishedPulling="2026-02-24 15:18:33.006800546 +0000 UTC m=+1774.625859039" observedRunningTime="2026-02-24 15:18:33.929959394 +0000 UTC m=+1775.549017897" watchObservedRunningTime="2026-02-24 15:18:33.934282492 +0000 UTC m=+1775.553341005" Feb 24 15:18:35 crc kubenswrapper[4982]: I0224 15:18:35.145926 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:18:35 crc kubenswrapper[4982]: E0224 15:18:35.146630 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.701735 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh"] Feb 24 15:18:39 crc kubenswrapper[4982]: E0224 15:18:39.702917 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501451ff-48f4-4ce7-afa6-50220d7f9ab7" containerName="init" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.702935 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="501451ff-48f4-4ce7-afa6-50220d7f9ab7" containerName="init" Feb 24 15:18:39 crc kubenswrapper[4982]: E0224 15:18:39.702983 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501451ff-48f4-4ce7-afa6-50220d7f9ab7" containerName="dnsmasq-dns" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.702995 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="501451ff-48f4-4ce7-afa6-50220d7f9ab7" containerName="dnsmasq-dns" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.703285 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="501451ff-48f4-4ce7-afa6-50220d7f9ab7" containerName="dnsmasq-dns" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.704240 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.708929 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.709064 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.709172 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.709355 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.721288 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh"] Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.764400 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.764764 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.765108 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgn5f\" (UniqueName: \"kubernetes.io/projected/f3a743f1-a8e5-4df7-a11d-60606242903e-kube-api-access-zgn5f\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.765159 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.867419 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.867621 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgn5f\" (UniqueName: 
\"kubernetes.io/projected/f3a743f1-a8e5-4df7-a11d-60606242903e-kube-api-access-zgn5f\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.867662 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.867734 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.873274 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.882167 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.882358 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:39 crc kubenswrapper[4982]: I0224 15:18:39.897520 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgn5f\" (UniqueName: \"kubernetes.io/projected/f3a743f1-a8e5-4df7-a11d-60606242903e-kube-api-access-zgn5f\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:40 crc kubenswrapper[4982]: I0224 15:18:40.034855 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" Feb 24 15:18:41 crc kubenswrapper[4982]: I0224 15:18:41.190139 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh"] Feb 24 15:18:41 crc kubenswrapper[4982]: I0224 15:18:41.862137 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-66f8dcd8d5-cmr9n" Feb 24 15:18:41 crc kubenswrapper[4982]: I0224 15:18:41.944190 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-ffddd7b8-rg7fc"] Feb 24 15:18:41 crc kubenswrapper[4982]: I0224 15:18:41.944458 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc" podUID="b6da668a-ee13-4cb8-9a49-1efd4f88237e" containerName="heat-cfnapi" containerID="cri-o://001b717024bc38c2123396fec6ba7af1872551207affaeeb61687cee3d073709" gracePeriod=60 Feb 24 15:18:41 crc kubenswrapper[4982]: I0224 15:18:41.986748 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-779fc6f99c-lz6wq" Feb 24 15:18:41 crc kubenswrapper[4982]: I0224 15:18:41.995473 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" event={"ID":"f3a743f1-a8e5-4df7-a11d-60606242903e","Type":"ContainerStarted","Data":"d51a063d5d7a6a1c8390721277211053f2bad4500973862e5bdd7afc3eb8b0cf"} Feb 24 15:18:42 crc kubenswrapper[4982]: I0224 15:18:42.044940 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6864b648bd-6hrlz"] Feb 24 15:18:42 crc kubenswrapper[4982]: I0224 15:18:42.045154 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6864b648bd-6hrlz" podUID="2ee35452-c5a3-489a-8b5a-c2310d6547c1" containerName="heat-api" containerID="cri-o://e28140a5776da40ceadd26a27415308dcf53780fa6e5bc6dc34055aed1e45744" gracePeriod=60 Feb 24 15:18:42 crc kubenswrapper[4982]: I0224 15:18:42.536333 4982 scope.go:117] "RemoveContainer" containerID="921ca0896c665205ebccaaf525e38ee0f6bfdfee37f158caa1b580766b1cf3f9" Feb 24 15:18:42 crc kubenswrapper[4982]: I0224 15:18:42.599004 4982 scope.go:117] "RemoveContainer" containerID="d2ee803e090636e5d60646ffcebd0b4e2ad9c37cb50b19a6f504cd6974215a85" Feb 24 15:18:42 crc kubenswrapper[4982]: I0224 15:18:42.659405 4982 scope.go:117] "RemoveContainer" containerID="31c0c7d0d28c8daf9dd2d3094371c0258a38e09f507b0d296ee16fcdf9fab36a" Feb 24 15:18:42 crc kubenswrapper[4982]: I0224 15:18:42.714644 4982 scope.go:117] "RemoveContainer" containerID="e65a5b62b701be4a32fd5be5e8e37052a0d069f1465949ba316bc1b9eda32b7d" Feb 24 15:18:42 crc kubenswrapper[4982]: I0224 15:18:42.769760 4982 scope.go:117] "RemoveContainer" containerID="05e8453a295a1bab670a904c2395cbc6e654284e7234f8e28c21601d5612d66f" Feb 24 15:18:45 crc kubenswrapper[4982]: I0224 15:18:45.134906 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc" podUID="b6da668a-ee13-4cb8-9a49-1efd4f88237e" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.224:8000/healthcheck\": read tcp 10.217.0.2:53182->10.217.0.224:8000: read: connection reset by peer" Feb 24 15:18:45 crc kubenswrapper[4982]: I0224 15:18:45.223399 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6864b648bd-6hrlz" podUID="2ee35452-c5a3-489a-8b5a-c2310d6547c1" containerName="heat-api" 
probeResult="failure" output="Get \"https://10.217.0.225:8004/healthcheck\": read tcp 10.217.0.2:57956->10.217.0.225:8004: read: connection reset by peer" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.066158 4982 generic.go:334] "Generic (PLEG): container finished" podID="2ee35452-c5a3-489a-8b5a-c2310d6547c1" containerID="e28140a5776da40ceadd26a27415308dcf53780fa6e5bc6dc34055aed1e45744" exitCode=0 Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.066429 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6864b648bd-6hrlz" event={"ID":"2ee35452-c5a3-489a-8b5a-c2310d6547c1","Type":"ContainerDied","Data":"e28140a5776da40ceadd26a27415308dcf53780fa6e5bc6dc34055aed1e45744"} Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.069959 4982 generic.go:334] "Generic (PLEG): container finished" podID="b6da668a-ee13-4cb8-9a49-1efd4f88237e" containerID="001b717024bc38c2123396fec6ba7af1872551207affaeeb61687cee3d073709" exitCode=0 Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.069984 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc" event={"ID":"b6da668a-ee13-4cb8-9a49-1efd4f88237e","Type":"ContainerDied","Data":"001b717024bc38c2123396fec6ba7af1872551207affaeeb61687cee3d073709"} Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.278261 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.285065 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6864b648bd-6hrlz" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.453387 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data\") pod \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.453440 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-combined-ca-bundle\") pod \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.453575 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-public-tls-certs\") pod \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.453833 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-combined-ca-bundle\") pod \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.453876 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data-custom\") pod \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.454509 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data\") pod \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.454560 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-internal-tls-certs\") pod \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.454630 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-internal-tls-certs\") pod \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.454726 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7v6n\" (UniqueName: \"kubernetes.io/projected/b6da668a-ee13-4cb8-9a49-1efd4f88237e-kube-api-access-w7v6n\") pod \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.454755 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data-custom\") pod \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.454778 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-public-tls-certs\") pod \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\" (UID: \"b6da668a-ee13-4cb8-9a49-1efd4f88237e\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.454797 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hcr5\" (UniqueName: \"kubernetes.io/projected/2ee35452-c5a3-489a-8b5a-c2310d6547c1-kube-api-access-5hcr5\") pod \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\" (UID: \"2ee35452-c5a3-489a-8b5a-c2310d6547c1\") " Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.459540 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b6da668a-ee13-4cb8-9a49-1efd4f88237e" (UID: "b6da668a-ee13-4cb8-9a49-1efd4f88237e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.460593 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee35452-c5a3-489a-8b5a-c2310d6547c1-kube-api-access-5hcr5" (OuterVolumeSpecName: "kube-api-access-5hcr5") pod "2ee35452-c5a3-489a-8b5a-c2310d6547c1" (UID: "2ee35452-c5a3-489a-8b5a-c2310d6547c1"). InnerVolumeSpecName "kube-api-access-5hcr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.460857 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6da668a-ee13-4cb8-9a49-1efd4f88237e-kube-api-access-w7v6n" (OuterVolumeSpecName: "kube-api-access-w7v6n") pod "b6da668a-ee13-4cb8-9a49-1efd4f88237e" (UID: "b6da668a-ee13-4cb8-9a49-1efd4f88237e"). InnerVolumeSpecName "kube-api-access-w7v6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.461144 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2ee35452-c5a3-489a-8b5a-c2310d6547c1" (UID: "2ee35452-c5a3-489a-8b5a-c2310d6547c1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.499431 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ee35452-c5a3-489a-8b5a-c2310d6547c1" (UID: "2ee35452-c5a3-489a-8b5a-c2310d6547c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.517300 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6da668a-ee13-4cb8-9a49-1efd4f88237e" (UID: "b6da668a-ee13-4cb8-9a49-1efd4f88237e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.533368 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b6da668a-ee13-4cb8-9a49-1efd4f88237e" (UID: "b6da668a-ee13-4cb8-9a49-1efd4f88237e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.537748 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2ee35452-c5a3-489a-8b5a-c2310d6547c1" (UID: "2ee35452-c5a3-489a-8b5a-c2310d6547c1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.540680 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data" (OuterVolumeSpecName: "config-data") pod "b6da668a-ee13-4cb8-9a49-1efd4f88237e" (UID: "b6da668a-ee13-4cb8-9a49-1efd4f88237e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.545951 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2ee35452-c5a3-489a-8b5a-c2310d6547c1" (UID: "2ee35452-c5a3-489a-8b5a-c2310d6547c1"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.557884 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.558087 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.558180 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.558250 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.558324 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.558396 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7v6n\" (UniqueName: \"kubernetes.io/projected/b6da668a-ee13-4cb8-9a49-1efd4f88237e-kube-api-access-w7v6n\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.558475 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.558585 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.558677 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hcr5\" (UniqueName: \"kubernetes.io/projected/2ee35452-c5a3-489a-8b5a-c2310d6547c1-kube-api-access-5hcr5\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.558748 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.564181 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b6da668a-ee13-4cb8-9a49-1efd4f88237e" (UID: "b6da668a-ee13-4cb8-9a49-1efd4f88237e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.570283 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data" (OuterVolumeSpecName: "config-data") pod "2ee35452-c5a3-489a-8b5a-c2310d6547c1" (UID: "2ee35452-c5a3-489a-8b5a-c2310d6547c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.659949 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee35452-c5a3-489a-8b5a-c2310d6547c1-config-data\") on node \"crc\" DevicePath \"\""
Feb 24 15:18:46 crc kubenswrapper[4982]: I0224 15:18:46.659988 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6da668a-ee13-4cb8-9a49-1efd4f88237e-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.083347 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6864b648bd-6hrlz" event={"ID":"2ee35452-c5a3-489a-8b5a-c2310d6547c1","Type":"ContainerDied","Data":"8a94b6f1c8e0de05e79300757ed40364568797d1770fe03689717a455ed5d47d"}
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.083753 4982 scope.go:117] "RemoveContainer" containerID="e28140a5776da40ceadd26a27415308dcf53780fa6e5bc6dc34055aed1e45744"
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.083590 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6864b648bd-6hrlz"
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.089964 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc" event={"ID":"b6da668a-ee13-4cb8-9a49-1efd4f88237e","Type":"ContainerDied","Data":"596aced197650c72d33f773184aa0e6a0383542e58864fac1a23fde822dddb0c"}
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.090019 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-ffddd7b8-rg7fc"
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.125174 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6864b648bd-6hrlz"]
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.138323 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6864b648bd-6hrlz"]
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.162495 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee35452-c5a3-489a-8b5a-c2310d6547c1" path="/var/lib/kubelet/pods/2ee35452-c5a3-489a-8b5a-c2310d6547c1/volumes"
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.163780 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-ffddd7b8-rg7fc"]
Feb 24 15:18:47 crc kubenswrapper[4982]: I0224 15:18:47.163818 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-ffddd7b8-rg7fc"]
Feb 24 15:18:48 crc kubenswrapper[4982]: I0224 15:18:48.124701 4982 generic.go:334] "Generic (PLEG): container finished" podID="fc18fffb-2c78-4097-8145-143bf44b11dc" containerID="1a067886fb6c2aa45f12b41436f4ba313b2816141e2d36483fda5f84892198bf" exitCode=0
Feb 24 15:18:48 crc kubenswrapper[4982]: I0224 15:18:48.124770 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc18fffb-2c78-4097-8145-143bf44b11dc","Type":"ContainerDied","Data":"1a067886fb6c2aa45f12b41436f4ba313b2816141e2d36483fda5f84892198bf"}
Feb 24 15:18:48 crc kubenswrapper[4982]: I0224 15:18:48.144805 4982 generic.go:334] "Generic (PLEG): container finished" podID="b68b4733-09b0-4fff-b032-f3339306a04d" containerID="1fdb8e994e6fc45f1d3a9d7bb1fb9729ab7f90114e27372e82d7ef0b76fc51f5" exitCode=0
Feb 24 15:18:48 crc kubenswrapper[4982]: I0224 15:18:48.145072 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b68b4733-09b0-4fff-b032-f3339306a04d","Type":"ContainerDied","Data":"1fdb8e994e6fc45f1d3a9d7bb1fb9729ab7f90114e27372e82d7ef0b76fc51f5"}
Feb 24 15:18:48 crc kubenswrapper[4982]: I0224 15:18:48.147640 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7"
Feb 24 15:18:48 crc kubenswrapper[4982]: E0224 15:18:48.148110 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 15:18:49 crc kubenswrapper[4982]: I0224 15:18:49.172993 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6da668a-ee13-4cb8-9a49-1efd4f88237e" path="/var/lib/kubelet/pods/b6da668a-ee13-4cb8-9a49-1efd4f88237e/volumes"
Feb 24 15:18:50 crc kubenswrapper[4982]: I0224 15:18:50.062703 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-847b6bf986-6knzb"
Feb 24 15:18:50 crc kubenswrapper[4982]: I0224 15:18:50.123800 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6f466c7dbf-ltqbw"]
Feb 24 15:18:50 crc kubenswrapper[4982]: I0224 15:18:50.123994 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6f466c7dbf-ltqbw" podUID="b855f1d1-f4f5-412f-a649-6adbdb13130d" containerName="heat-engine" containerID="cri-o://6f2ef479b7da24e3ac65c2caaedc0a0d3846feb6075ae4371c8a70a91a641664" gracePeriod=60
Feb 24 15:18:54 crc kubenswrapper[4982]: I0224 15:18:54.028520 4982 scope.go:117] "RemoveContainer" containerID="001b717024bc38c2123396fec6ba7af1872551207affaeeb61687cee3d073709"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.287570 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-zb79k"]
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.300906 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" event={"ID":"f3a743f1-a8e5-4df7-a11d-60606242903e","Type":"ContainerStarted","Data":"064840fa416269797210fb0beb4632b513069bfa630044fe607740ca45dc4975"}
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.302077 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-zb79k"]
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.307938 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fc18fffb-2c78-4097-8145-143bf44b11dc","Type":"ContainerStarted","Data":"c1909a37e4b35611bc5d07203e61ed4918d9ace4482948f8786ed2e10789b146"}
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.309572 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.339729 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"b68b4733-09b0-4fff-b032-f3339306a04d","Type":"ContainerStarted","Data":"5c4a26b30d787479cf6f7e0c8ae2f6804cb67fee09d0cc1a6cbcfcddfca8e16a"}
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.341424 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.353354 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" podStartSLOduration=3.367721276 podStartE2EDuration="16.353334785s" podCreationTimestamp="2026-02-24 15:18:39 +0000 UTC" firstStartedPulling="2026-02-24 15:18:41.190699693 +0000 UTC m=+1782.809758186" lastFinishedPulling="2026-02-24 15:18:54.176313192 +0000 UTC m=+1795.795371695" observedRunningTime="2026-02-24 15:18:55.334902414 +0000 UTC m=+1796.953960957" watchObservedRunningTime="2026-02-24 15:18:55.353334785 +0000 UTC m=+1796.972393278"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.371050 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.371035125 podStartE2EDuration="52.371035125s" podCreationTimestamp="2026-02-24 15:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:18:55.369318168 +0000 UTC m=+1796.988376661" watchObservedRunningTime="2026-02-24 15:18:55.371035125 +0000 UTC m=+1796.990093618"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.417765 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=43.417715353 podStartE2EDuration="43.417715353s" podCreationTimestamp="2026-02-24 15:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:18:55.399824737 +0000 UTC m=+1797.018883230" watchObservedRunningTime="2026-02-24 15:18:55.417715353 +0000 UTC m=+1797.036773846"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.454179 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-7nhgx"]
Feb 24 15:18:55 crc kubenswrapper[4982]: E0224 15:18:55.454878 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee35452-c5a3-489a-8b5a-c2310d6547c1" containerName="heat-api"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.454906 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee35452-c5a3-489a-8b5a-c2310d6547c1" containerName="heat-api"
Feb 24 15:18:55 crc kubenswrapper[4982]: E0224 15:18:55.454927 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6da668a-ee13-4cb8-9a49-1efd4f88237e" containerName="heat-cfnapi"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.454936 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6da668a-ee13-4cb8-9a49-1efd4f88237e" containerName="heat-cfnapi"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.455217 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee35452-c5a3-489a-8b5a-c2310d6547c1" containerName="heat-api"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.455257 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6da668a-ee13-4cb8-9a49-1efd4f88237e" containerName="heat-cfnapi"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.456272 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.464614 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.469463 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-7nhgx"]
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.642827 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-scripts\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.642954 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-combined-ca-bundle\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.642991 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-config-data\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.643032 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69t8f\" (UniqueName: \"kubernetes.io/projected/5cf05096-32e4-4781-8604-33ec559bec9a-kube-api-access-69t8f\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.745113 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-combined-ca-bundle\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.745173 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-config-data\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.745205 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69t8f\" (UniqueName: \"kubernetes.io/projected/5cf05096-32e4-4781-8604-33ec559bec9a-kube-api-access-69t8f\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.745373 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-scripts\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.750576 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-scripts\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.753400 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-config-data\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.755027 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-combined-ca-bundle\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.771156 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69t8f\" (UniqueName: \"kubernetes.io/projected/5cf05096-32e4-4781-8604-33ec559bec9a-kube-api-access-69t8f\") pod \"aodh-db-sync-7nhgx\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") " pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:55 crc kubenswrapper[4982]: I0224 15:18:55.800062 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:18:56 crc kubenswrapper[4982]: I0224 15:18:56.452594 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-7nhgx"]
Feb 24 15:18:57 crc kubenswrapper[4982]: I0224 15:18:57.157827 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f1556f-4320-4c99-9296-8526ded51204" path="/var/lib/kubelet/pods/e3f1556f-4320-4c99-9296-8526ded51204/volumes"
Feb 24 15:18:57 crc kubenswrapper[4982]: I0224 15:18:57.376996 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-7nhgx" event={"ID":"5cf05096-32e4-4781-8604-33ec559bec9a","Type":"ContainerStarted","Data":"faad4da7f863ffb6e17655b70c5472268219a07db03755efdbd65d67bdde43b2"}
Feb 24 15:18:59 crc kubenswrapper[4982]: E0224 15:18:59.911143 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f2ef479b7da24e3ac65c2caaedc0a0d3846feb6075ae4371c8a70a91a641664" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 24 15:18:59 crc kubenswrapper[4982]: E0224 15:18:59.913073 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f2ef479b7da24e3ac65c2caaedc0a0d3846feb6075ae4371c8a70a91a641664" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 24 15:18:59 crc kubenswrapper[4982]: E0224 15:18:59.914269 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f2ef479b7da24e3ac65c2caaedc0a0d3846feb6075ae4371c8a70a91a641664" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 24 15:18:59 crc kubenswrapper[4982]: E0224 15:18:59.914360 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6f466c7dbf-ltqbw" podUID="b855f1d1-f4f5-412f-a649-6adbdb13130d" containerName="heat-engine"
Feb 24 15:19:01 crc kubenswrapper[4982]: I0224 15:19:01.145948 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7"
Feb 24 15:19:01 crc kubenswrapper[4982]: E0224 15:19:01.146690 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 15:19:02 crc kubenswrapper[4982]: I0224 15:19:02.697714 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 24 15:19:03 crc kubenswrapper[4982]: I0224 15:19:03.469235 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-7nhgx" event={"ID":"5cf05096-32e4-4781-8604-33ec559bec9a","Type":"ContainerStarted","Data":"ad07d0da6ed74db672a59c9abc0971826c57268144cf04310c8816718776d557"}
Feb 24 15:19:03 crc kubenswrapper[4982]: I0224 15:19:03.502934 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-7nhgx" podStartSLOduration=2.268651504 podStartE2EDuration="8.502907719s" podCreationTimestamp="2026-02-24 15:18:55 +0000 UTC" firstStartedPulling="2026-02-24 15:18:56.461215549 +0000 UTC m=+1798.080274042" lastFinishedPulling="2026-02-24 15:19:02.695471764 +0000 UTC m=+1804.314530257" observedRunningTime="2026-02-24 15:19:03.495974991 +0000 UTC m=+1805.115033474" watchObservedRunningTime="2026-02-24 15:19:03.502907719 +0000 UTC m=+1805.121966212"
Feb 24 15:19:04 crc kubenswrapper[4982]: I0224 15:19:04.012563 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fc18fffb-2c78-4097-8145-143bf44b11dc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.21:5671: connect: connection refused"
Feb 24 15:19:06 crc kubenswrapper[4982]: I0224 15:19:06.509627 4982 generic.go:334] "Generic (PLEG): container finished" podID="f3a743f1-a8e5-4df7-a11d-60606242903e" containerID="064840fa416269797210fb0beb4632b513069bfa630044fe607740ca45dc4975" exitCode=0
Feb 24 15:19:06 crc kubenswrapper[4982]: I0224 15:19:06.509993 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" event={"ID":"f3a743f1-a8e5-4df7-a11d-60606242903e","Type":"ContainerDied","Data":"064840fa416269797210fb0beb4632b513069bfa630044fe607740ca45dc4975"}
Feb 24 15:19:06 crc kubenswrapper[4982]: I0224 15:19:06.513645 4982 generic.go:334] "Generic (PLEG): container finished" podID="5cf05096-32e4-4781-8604-33ec559bec9a" containerID="ad07d0da6ed74db672a59c9abc0971826c57268144cf04310c8816718776d557" exitCode=0
Feb 24 15:19:06 crc kubenswrapper[4982]: I0224 15:19:06.513686 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-7nhgx" event={"ID":"5cf05096-32e4-4781-8604-33ec559bec9a","Type":"ContainerDied","Data":"ad07d0da6ed74db672a59c9abc0971826c57268144cf04310c8816718776d557"}
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.174232 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.301341 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69t8f\" (UniqueName: \"kubernetes.io/projected/5cf05096-32e4-4781-8604-33ec559bec9a-kube-api-access-69t8f\") pod \"5cf05096-32e4-4781-8604-33ec559bec9a\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") "
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.301567 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-config-data\") pod \"5cf05096-32e4-4781-8604-33ec559bec9a\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") "
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.301699 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-combined-ca-bundle\") pod \"5cf05096-32e4-4781-8604-33ec559bec9a\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") "
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.301744 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-scripts\") pod \"5cf05096-32e4-4781-8604-33ec559bec9a\" (UID: \"5cf05096-32e4-4781-8604-33ec559bec9a\") "
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.307357 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf05096-32e4-4781-8604-33ec559bec9a-kube-api-access-69t8f" (OuterVolumeSpecName: "kube-api-access-69t8f") pod "5cf05096-32e4-4781-8604-33ec559bec9a" (UID: "5cf05096-32e4-4781-8604-33ec559bec9a"). InnerVolumeSpecName "kube-api-access-69t8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.307576 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-scripts" (OuterVolumeSpecName: "scripts") pod "5cf05096-32e4-4781-8604-33ec559bec9a" (UID: "5cf05096-32e4-4781-8604-33ec559bec9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.324089 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.346786 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-config-data" (OuterVolumeSpecName: "config-data") pod "5cf05096-32e4-4781-8604-33ec559bec9a" (UID: "5cf05096-32e4-4781-8604-33ec559bec9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.349703 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cf05096-32e4-4781-8604-33ec559bec9a" (UID: "5cf05096-32e4-4781-8604-33ec559bec9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.403722 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgn5f\" (UniqueName: \"kubernetes.io/projected/f3a743f1-a8e5-4df7-a11d-60606242903e-kube-api-access-zgn5f\") pod \"f3a743f1-a8e5-4df7-a11d-60606242903e\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") "
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.403806 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-inventory\") pod \"f3a743f1-a8e5-4df7-a11d-60606242903e\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") "
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.403957 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-repo-setup-combined-ca-bundle\") pod \"f3a743f1-a8e5-4df7-a11d-60606242903e\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") "
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.404195 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-ssh-key-openstack-edpm-ipam\") pod \"f3a743f1-a8e5-4df7-a11d-60606242903e\" (UID: \"f3a743f1-a8e5-4df7-a11d-60606242903e\") "
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.404940 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-config-data\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.404965 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.404980 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf05096-32e4-4781-8604-33ec559bec9a-scripts\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.404993 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69t8f\" (UniqueName: \"kubernetes.io/projected/5cf05096-32e4-4781-8604-33ec559bec9a-kube-api-access-69t8f\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.408241 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a743f1-a8e5-4df7-a11d-60606242903e-kube-api-access-zgn5f" (OuterVolumeSpecName: "kube-api-access-zgn5f") pod "f3a743f1-a8e5-4df7-a11d-60606242903e" (UID: "f3a743f1-a8e5-4df7-a11d-60606242903e"). InnerVolumeSpecName "kube-api-access-zgn5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.408959 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f3a743f1-a8e5-4df7-a11d-60606242903e" (UID: "f3a743f1-a8e5-4df7-a11d-60606242903e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.445634 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f3a743f1-a8e5-4df7-a11d-60606242903e" (UID: "f3a743f1-a8e5-4df7-a11d-60606242903e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.496150 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-inventory" (OuterVolumeSpecName: "inventory") pod "f3a743f1-a8e5-4df7-a11d-60606242903e" (UID: "f3a743f1-a8e5-4df7-a11d-60606242903e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.507634 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.507669 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgn5f\" (UniqueName: \"kubernetes.io/projected/f3a743f1-a8e5-4df7-a11d-60606242903e-kube-api-access-zgn5f\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.507684 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-inventory\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.507695 4982 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a743f1-a8e5-4df7-a11d-60606242903e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.541264 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh" event={"ID":"f3a743f1-a8e5-4df7-a11d-60606242903e","Type":"ContainerDied","Data":"d51a063d5d7a6a1c8390721277211053f2bad4500973862e5bdd7afc3eb8b0cf"}
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.541327 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d51a063d5d7a6a1c8390721277211053f2bad4500973862e5bdd7afc3eb8b0cf"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.541411 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.553734 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-7nhgx" event={"ID":"5cf05096-32e4-4781-8604-33ec559bec9a","Type":"ContainerDied","Data":"faad4da7f863ffb6e17655b70c5472268219a07db03755efdbd65d67bdde43b2"}
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.553796 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faad4da7f863ffb6e17655b70c5472268219a07db03755efdbd65d67bdde43b2"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.553880 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-7nhgx"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.561012 4982 generic.go:334] "Generic (PLEG): container finished" podID="b855f1d1-f4f5-412f-a649-6adbdb13130d" containerID="6f2ef479b7da24e3ac65c2caaedc0a0d3846feb6075ae4371c8a70a91a641664" exitCode=0
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.561052 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f466c7dbf-ltqbw" event={"ID":"b855f1d1-f4f5-412f-a649-6adbdb13130d","Type":"ContainerDied","Data":"6f2ef479b7da24e3ac65c2caaedc0a0d3846feb6075ae4371c8a70a91a641664"}
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.637584 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"]
Feb 24 15:19:08 crc kubenswrapper[4982]: E0224 15:19:08.638214 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf05096-32e4-4781-8604-33ec559bec9a" containerName="aodh-db-sync"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.638235 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf05096-32e4-4781-8604-33ec559bec9a" containerName="aodh-db-sync"
Feb 24 15:19:08 crc kubenswrapper[4982]: E0224 15:19:08.638292 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a743f1-a8e5-4df7-a11d-60606242903e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.638302 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a743f1-a8e5-4df7-a11d-60606242903e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.638584 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a743f1-a8e5-4df7-a11d-60606242903e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.638621 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf05096-32e4-4781-8604-33ec559bec9a" containerName="aodh-db-sync"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.639605 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.645604 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.646086 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.646355 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.655917 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.682832 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"]
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.710778 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7g4bk\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.710867 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7g4bk\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.710974 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmc4\" (UniqueName: \"kubernetes.io/projected/f0dc1e67-72c6-406f-ae09-eb8089da0840-kube-api-access-6pmc4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7g4bk\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.813073 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7g4bk\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.813155 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7g4bk\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.813213 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pmc4\" (UniqueName: \"kubernetes.io/projected/f0dc1e67-72c6-406f-ae09-eb8089da0840-kube-api-access-6pmc4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7g4bk\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.820486 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7g4bk\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.820675 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7g4bk\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.830734 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmc4\" (UniqueName: \"kubernetes.io/projected/f0dc1e67-72c6-406f-ae09-eb8089da0840-kube-api-access-6pmc4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7g4bk\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.950923 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f466c7dbf-ltqbw"
Feb 24 15:19:08 crc kubenswrapper[4982]: I0224 15:19:08.975164 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.023464 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxbmh\" (UniqueName: \"kubernetes.io/projected/b855f1d1-f4f5-412f-a649-6adbdb13130d-kube-api-access-qxbmh\") pod \"b855f1d1-f4f5-412f-a649-6adbdb13130d\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") "
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.024326 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data-custom\") pod \"b855f1d1-f4f5-412f-a649-6adbdb13130d\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") "
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.024470 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data\") pod \"b855f1d1-f4f5-412f-a649-6adbdb13130d\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") "
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.028038 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-combined-ca-bundle\") pod \"b855f1d1-f4f5-412f-a649-6adbdb13130d\" (UID: \"b855f1d1-f4f5-412f-a649-6adbdb13130d\") "
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.033757 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b855f1d1-f4f5-412f-a649-6adbdb13130d" (UID: "b855f1d1-f4f5-412f-a649-6adbdb13130d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.044224 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b855f1d1-f4f5-412f-a649-6adbdb13130d-kube-api-access-qxbmh" (OuterVolumeSpecName: "kube-api-access-qxbmh") pod "b855f1d1-f4f5-412f-a649-6adbdb13130d" (UID: "b855f1d1-f4f5-412f-a649-6adbdb13130d"). InnerVolumeSpecName "kube-api-access-qxbmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.046040 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.046070 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxbmh\" (UniqueName: \"kubernetes.io/projected/b855f1d1-f4f5-412f-a649-6adbdb13130d-kube-api-access-qxbmh\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.083946 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b855f1d1-f4f5-412f-a649-6adbdb13130d" (UID: "b855f1d1-f4f5-412f-a649-6adbdb13130d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.093825 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data" (OuterVolumeSpecName: "config-data") pod "b855f1d1-f4f5-412f-a649-6adbdb13130d" (UID: "b855f1d1-f4f5-412f-a649-6adbdb13130d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.149642 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-config-data\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:09 crc kubenswrapper[4982]: I0224 15:19:09.149673 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b855f1d1-f4f5-412f-a649-6adbdb13130d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:09.575797 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6f466c7dbf-ltqbw" event={"ID":"b855f1d1-f4f5-412f-a649-6adbdb13130d","Type":"ContainerDied","Data":"25c2aa7d3c68e15b7a7d0831ae272edf467dfad5468da2cf61706abea7cfba15"}
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:09.576195 4982 scope.go:117] "RemoveContainer" containerID="6f2ef479b7da24e3ac65c2caaedc0a0d3846feb6075ae4371c8a70a91a641664"
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:09.576084 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6f466c7dbf-ltqbw"
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:09.608593 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"]
Feb 24 15:19:10 crc kubenswrapper[4982]: W0224 15:19:09.616982 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0dc1e67_72c6_406f_ae09_eb8089da0840.slice/crio-216f70d414dd3f321fc6c1ce0e1ebdb03f79725e9e8a5a85dfdd3319fb141910 WatchSource:0}: Error finding container 216f70d414dd3f321fc6c1ce0e1ebdb03f79725e9e8a5a85dfdd3319fb141910: Status 404 returned error can't find the container with id 216f70d414dd3f321fc6c1ce0e1ebdb03f79725e9e8a5a85dfdd3319fb141910
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:09.620832 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6f466c7dbf-ltqbw"]
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:09.632635 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6f466c7dbf-ltqbw"]
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:10.309812 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:10.311479 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-listener" containerID="cri-o://a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7" gracePeriod=30
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:10.311517 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-notifier" containerID="cri-o://c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e" gracePeriod=30
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:10.311404 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-api" containerID="cri-o://7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19" gracePeriod=30
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:10.311491 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-evaluator" containerID="cri-o://a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7" gracePeriod=30
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:10.593598 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk" event={"ID":"f0dc1e67-72c6-406f-ae09-eb8089da0840","Type":"ContainerStarted","Data":"720ec4a538176b5120d6374cc17a8b1382b00f63b7b32023e914df96c61ee82b"}
Feb 24 15:19:10 crc kubenswrapper[4982]: I0224 15:19:10.593944 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk" event={"ID":"f0dc1e67-72c6-406f-ae09-eb8089da0840","Type":"ContainerStarted","Data":"216f70d414dd3f321fc6c1ce0e1ebdb03f79725e9e8a5a85dfdd3319fb141910"}
Feb 24 15:19:11 crc kubenswrapper[4982]: I0224 15:19:11.159451 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b855f1d1-f4f5-412f-a649-6adbdb13130d" path="/var/lib/kubelet/pods/b855f1d1-f4f5-412f-a649-6adbdb13130d/volumes"
Feb 24 15:19:11 crc kubenswrapper[4982]: I0224 15:19:11.607331 4982 generic.go:334] "Generic (PLEG): container finished" podID="c6885b08-9767-422f-8833-6b09f9401bfd" containerID="a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7" exitCode=0
Feb 24 15:19:11 crc kubenswrapper[4982]: I0224 15:19:11.607687 4982 generic.go:334] "Generic (PLEG): container finished" podID="c6885b08-9767-422f-8833-6b09f9401bfd" containerID="7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19" exitCode=0
Feb 24 15:19:11 crc kubenswrapper[4982]: I0224 15:19:11.607420 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerDied","Data":"a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7"}
Feb 24 15:19:11 crc kubenswrapper[4982]: I0224 15:19:11.607774 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerDied","Data":"7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19"}
Feb 24 15:19:13 crc kubenswrapper[4982]: I0224 15:19:13.146337 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7"
Feb 24 15:19:13 crc kubenswrapper[4982]: E0224 15:19:13.147192 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 15:19:13 crc kubenswrapper[4982]: I0224 15:19:13.165768 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2"
Feb 24 15:19:13 crc kubenswrapper[4982]: I0224 15:19:13.209339 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk" podStartSLOduration=4.750542712 podStartE2EDuration="5.20931631s" podCreationTimestamp="2026-02-24 15:19:08 +0000 UTC" firstStartedPulling="2026-02-24 15:19:09.626635362 +0000 UTC m=+1811.245693855" lastFinishedPulling="2026-02-24 15:19:10.08540896 +0000 UTC m=+1811.704467453" observedRunningTime="2026-02-24 15:19:11.632519702 +0000 UTC m=+1813.251578205" watchObservedRunningTime="2026-02-24 15:19:13.20931631 +0000 UTC m=+1814.828374813"
Feb 24 15:19:13 crc kubenswrapper[4982]: I0224 15:19:13.235054 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 24 15:19:13 crc kubenswrapper[4982]: I0224 15:19:13.633732 4982 generic.go:334] "Generic (PLEG): container finished" podID="f0dc1e67-72c6-406f-ae09-eb8089da0840" containerID="720ec4a538176b5120d6374cc17a8b1382b00f63b7b32023e914df96c61ee82b" exitCode=0
Feb 24 15:19:13 crc kubenswrapper[4982]: I0224 15:19:13.633777 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk" event={"ID":"f0dc1e67-72c6-406f-ae09-eb8089da0840","Type":"ContainerDied","Data":"720ec4a538176b5120d6374cc17a8b1382b00f63b7b32023e914df96c61ee82b"}
Feb 24 15:19:14 crc kubenswrapper[4982]: I0224 15:19:14.010788 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 24 15:19:14 crc kubenswrapper[4982]: I0224 15:19:14.648389 4982 generic.go:334] "Generic (PLEG): container finished" podID="c6885b08-9767-422f-8833-6b09f9401bfd" containerID="a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7" exitCode=0
Feb 24 15:19:14 crc kubenswrapper[4982]: I0224 15:19:14.648474 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerDied","Data":"a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7"}
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.233654 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.322921 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-inventory\") pod \"f0dc1e67-72c6-406f-ae09-eb8089da0840\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") "
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.323561 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-ssh-key-openstack-edpm-ipam\") pod \"f0dc1e67-72c6-406f-ae09-eb8089da0840\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") "
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.323594 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pmc4\" (UniqueName: \"kubernetes.io/projected/f0dc1e67-72c6-406f-ae09-eb8089da0840-kube-api-access-6pmc4\") pod \"f0dc1e67-72c6-406f-ae09-eb8089da0840\" (UID: \"f0dc1e67-72c6-406f-ae09-eb8089da0840\") "
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.336791 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0dc1e67-72c6-406f-ae09-eb8089da0840-kube-api-access-6pmc4" (OuterVolumeSpecName: "kube-api-access-6pmc4") pod "f0dc1e67-72c6-406f-ae09-eb8089da0840" (UID: "f0dc1e67-72c6-406f-ae09-eb8089da0840"). InnerVolumeSpecName "kube-api-access-6pmc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.357470 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f0dc1e67-72c6-406f-ae09-eb8089da0840" (UID: "f0dc1e67-72c6-406f-ae09-eb8089da0840"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.364603 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-inventory" (OuterVolumeSpecName: "inventory") pod "f0dc1e67-72c6-406f-ae09-eb8089da0840" (UID: "f0dc1e67-72c6-406f-ae09-eb8089da0840"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.426948 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-inventory\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.427122 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0dc1e67-72c6-406f-ae09-eb8089da0840-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.427196 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pmc4\" (UniqueName: \"kubernetes.io/projected/f0dc1e67-72c6-406f-ae09-eb8089da0840-kube-api-access-6pmc4\") on node \"crc\" DevicePath \"\""
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.663207 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk" event={"ID":"f0dc1e67-72c6-406f-ae09-eb8089da0840","Type":"ContainerDied","Data":"216f70d414dd3f321fc6c1ce0e1ebdb03f79725e9e8a5a85dfdd3319fb141910"}
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.663246 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="216f70d414dd3f321fc6c1ce0e1ebdb03f79725e9e8a5a85dfdd3319fb141910"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.664528 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7g4bk"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.811100 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"]
Feb 24 15:19:15 crc kubenswrapper[4982]: E0224 15:19:15.811764 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b855f1d1-f4f5-412f-a649-6adbdb13130d" containerName="heat-engine"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.811793 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b855f1d1-f4f5-412f-a649-6adbdb13130d" containerName="heat-engine"
Feb 24 15:19:15 crc kubenswrapper[4982]: E0224 15:19:15.811831 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0dc1e67-72c6-406f-ae09-eb8089da0840" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.811842 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0dc1e67-72c6-406f-ae09-eb8089da0840" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.813120 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b855f1d1-f4f5-412f-a649-6adbdb13130d" containerName="heat-engine"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.813165 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0dc1e67-72c6-406f-ae09-eb8089da0840" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.814874 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.818158 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.818892 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.818996 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.819415 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.831097 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"]
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.941520 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.941576 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.941610 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:15 crc kubenswrapper[4982]: I0224 15:19:15.941641 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxj98\" (UniqueName: \"kubernetes.io/projected/ff453eac-e860-4c72-9c3c-0aa80e0554d1-kube-api-access-cxj98\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.044389 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.044467 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.044520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.044545 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxj98\" (UniqueName: \"kubernetes.io/projected/ff453eac-e860-4c72-9c3c-0aa80e0554d1-kube-api-access-cxj98\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.048635 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.050434 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.055252 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.077968 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxj98\" (UniqueName: \"kubernetes.io/projected/ff453eac-e860-4c72-9c3c-0aa80e0554d1-kube-api-access-cxj98\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.131542 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"
Feb 24 15:19:16 crc kubenswrapper[4982]: I0224 15:19:16.694402 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc"]
Feb 24 15:19:16 crc kubenswrapper[4982]: W0224 15:19:16.706093 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff453eac_e860_4c72_9c3c_0aa80e0554d1.slice/crio-afb3d6a762774a4584116678e47f3a57321d36e6e621f0a9f7038c47337649dc WatchSource:0}: Error finding container afb3d6a762774a4584116678e47f3a57321d36e6e621f0a9f7038c47337649dc: Status 404 returned error can't find the container with id afb3d6a762774a4584116678e47f3a57321d36e6e621f0a9f7038c47337649dc
Feb 24 15:19:17 crc kubenswrapper[4982]: I0224 15:19:17.595055 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerName="rabbitmq" containerID="cri-o://d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7" gracePeriod=604796
Feb 24 15:19:17 crc kubenswrapper[4982]: I0224 15:19:17.703157 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc" event={"ID":"ff453eac-e860-4c72-9c3c-0aa80e0554d1","Type":"ContainerStarted","Data":"a5cb81c46f0b9b498364daff21f23403752349dbc1bff882beb9d717c612a12b"}
Feb 24 15:19:17 crc kubenswrapper[4982]: I0224 15:19:17.703208 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc" event={"ID":"ff453eac-e860-4c72-9c3c-0aa80e0554d1","Type":"ContainerStarted","Data":"afb3d6a762774a4584116678e47f3a57321d36e6e621f0a9f7038c47337649dc"}
Feb 24 15:19:17 crc kubenswrapper[4982]: I0224 15:19:17.724845 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc" podStartSLOduration=2.181909626 podStartE2EDuration="2.72482742s" podCreationTimestamp="2026-02-24 15:19:15 +0000 UTC" firstStartedPulling="2026-02-24 15:19:16.708601624 +0000 UTC m=+1818.327660137" lastFinishedPulling="2026-02-24 15:19:17.251519438 +0000 UTC m=+1818.870577931" observedRunningTime="2026-02-24 15:19:17.721488819 +0000 UTC m=+1819.340547322" watchObservedRunningTime="2026-02-24 15:19:17.72482742 +0000 UTC m=+1819.343885903"
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.207134 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.350284 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-public-tls-certs\") pod \"c6885b08-9767-422f-8833-6b09f9401bfd\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") "
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.350443 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-combined-ca-bundle\") pod \"c6885b08-9767-422f-8833-6b09f9401bfd\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") "
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.350665 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-scripts\") pod \"c6885b08-9767-422f-8833-6b09f9401bfd\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") "
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.350747 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgf5w\" (UniqueName: \"kubernetes.io/projected/c6885b08-9767-422f-8833-6b09f9401bfd-kube-api-access-dgf5w\") pod \"c6885b08-9767-422f-8833-6b09f9401bfd\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") "
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.350804 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-config-data\") pod \"c6885b08-9767-422f-8833-6b09f9401bfd\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") "
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.350856 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-internal-tls-certs\") pod \"c6885b08-9767-422f-8833-6b09f9401bfd\" (UID: \"c6885b08-9767-422f-8833-6b09f9401bfd\") "
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.362051 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6885b08-9767-422f-8833-6b09f9401bfd-kube-api-access-dgf5w" (OuterVolumeSpecName: "kube-api-access-dgf5w") pod "c6885b08-9767-422f-8833-6b09f9401bfd" (UID: "c6885b08-9767-422f-8833-6b09f9401bfd"). InnerVolumeSpecName "kube-api-access-dgf5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.367647 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-scripts" (OuterVolumeSpecName: "scripts") pod "c6885b08-9767-422f-8833-6b09f9401bfd" (UID: "c6885b08-9767-422f-8833-6b09f9401bfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.425108 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c6885b08-9767-422f-8833-6b09f9401bfd" (UID: "c6885b08-9767-422f-8833-6b09f9401bfd"). InnerVolumeSpecName "internal-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.444040 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c6885b08-9767-422f-8833-6b09f9401bfd" (UID: "c6885b08-9767-422f-8833-6b09f9401bfd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.455874 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.455904 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-scripts\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.455914 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgf5w\" (UniqueName: \"kubernetes.io/projected/c6885b08-9767-422f-8833-6b09f9401bfd-kube-api-access-dgf5w\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.455924 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.489393 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-config-data" (OuterVolumeSpecName: "config-data") pod "c6885b08-9767-422f-8833-6b09f9401bfd" (UID: "c6885b08-9767-422f-8833-6b09f9401bfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.505697 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6885b08-9767-422f-8833-6b09f9401bfd" (UID: "c6885b08-9767-422f-8833-6b09f9401bfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.561860 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.561927 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6885b08-9767-422f-8833-6b09f9401bfd-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.742705 4982 generic.go:334] "Generic (PLEG): container finished" podID="c6885b08-9767-422f-8833-6b09f9401bfd" containerID="c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e" exitCode=0 Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.742762 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.742791 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerDied","Data":"c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e"} Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.743221 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6885b08-9767-422f-8833-6b09f9401bfd","Type":"ContainerDied","Data":"75bee8d61b7b215d508d27661fd973f6b69561652c210bac4b4db336633583fb"} Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.743247 4982 scope.go:117] "RemoveContainer" containerID="a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.818880 4982 scope.go:117] "RemoveContainer" containerID="c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.848344 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.862566 4982 scope.go:117] "RemoveContainer" containerID="a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.901308 4982 scope.go:117] "RemoveContainer" containerID="7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.906453 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.917302 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 24 15:19:20 crc kubenswrapper[4982]: E0224 15:19:20.917767 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-evaluator" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.917789 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-evaluator" Feb 24 15:19:20 crc kubenswrapper[4982]: E0224 15:19:20.917806 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-listener" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.917813 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-listener" Feb 24 15:19:20 crc kubenswrapper[4982]: E0224 15:19:20.917821 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-notifier" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.917827 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-notifier" Feb 24 15:19:20 crc kubenswrapper[4982]: E0224 15:19:20.917849 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-api" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.917856 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-api" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.918110 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-evaluator" Feb 24 15:19:20 crc 
kubenswrapper[4982]: I0224 15:19:20.918137 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-api" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.918157 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-listener" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.918169 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" containerName="aodh-notifier" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.920297 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.922389 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.922587 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.923585 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.923833 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.924222 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-2qctw" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.925292 4982 scope.go:117] "RemoveContainer" containerID="a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7" Feb 24 15:19:20 crc kubenswrapper[4982]: E0224 15:19:20.929356 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7\": container with ID starting with a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7 not found: ID does not exist" containerID="a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.929397 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7"} err="failed to get container status \"a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7\": rpc error: code = NotFound desc = could not find container \"a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7\": container with ID starting with a9cc1a9b1c79c319249eeeb7b390b17ab3593cef274d178d6cf65f89ce486fb7 not found: ID does not exist" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.929448 4982 scope.go:117] "RemoveContainer" containerID="c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e" Feb 24 15:19:20 crc kubenswrapper[4982]: E0224 15:19:20.930570 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e\": container with ID starting with c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e not found: ID does not exist" containerID="c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.930621 4982 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e"} err="failed to get container status \"c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e\": rpc error: code = NotFound desc = could not find container \"c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e\": container with ID starting with c9e6eeb1afa983c6e426b9f71603ba9826abb6f88c41f3197fe8407fb6c2d98e not found: ID does not exist" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.930652 4982 scope.go:117] "RemoveContainer" containerID="a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7" Feb 24 15:19:20 crc kubenswrapper[4982]: E0224 15:19:20.930950 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7\": container with ID starting with a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7 not found: ID does not exist" containerID="a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.930984 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7"} err="failed to get container status \"a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7\": rpc error: code = NotFound desc = could not find container \"a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7\": container with ID starting with a01a52944f3cc63f433a83101294f3d2616e917247acf3a11033b1e4846dfbd7 not found: ID does not exist" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.931004 4982 scope.go:117] "RemoveContainer" containerID="7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19" Feb 24 15:19:20 crc kubenswrapper[4982]: E0224 15:19:20.931800 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19\": container with ID starting with 7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19 not found: ID does not exist" containerID="7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.931845 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19"} err="failed to get container status \"7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19\": rpc error: code = NotFound desc = could not find container \"7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19\": container with ID starting with 7e65954637980914bbca85bec0e759953fcde1ae320ef0b4df4b9982f7eedf19 not found: ID does not exist" Feb 24 15:19:20 crc kubenswrapper[4982]: I0224 15:19:20.935561 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.081743 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-config-data\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.082008 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-public-tls-certs\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.082171 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-scripts\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.082254 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5znb\" (UniqueName: \"kubernetes.io/projected/7b46506f-5421-47ab-9ed9-2328c663adb8-kube-api-access-d5znb\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.082356 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-internal-tls-certs\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.082420 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.160844 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6885b08-9767-422f-8833-6b09f9401bfd" path="/var/lib/kubelet/pods/c6885b08-9767-422f-8833-6b09f9401bfd/volumes" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.184969 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-config-data\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.185123 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-public-tls-certs\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.185191 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-scripts\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.185238 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5znb\" (UniqueName: \"kubernetes.io/projected/7b46506f-5421-47ab-9ed9-2328c663adb8-kube-api-access-d5znb\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.185292 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-internal-tls-certs\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.185340 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.189973 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-config-data\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.190162 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-internal-tls-certs\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.190287 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-public-tls-certs\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.192599 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.192889 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b46506f-5421-47ab-9ed9-2328c663adb8-scripts\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.203587 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5znb\" (UniqueName: \"kubernetes.io/projected/7b46506f-5421-47ab-9ed9-2328c663adb8-kube-api-access-d5znb\") pod \"aodh-0\" (UID: \"7b46506f-5421-47ab-9ed9-2328c663adb8\") " pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.254711 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 24 15:19:21 crc kubenswrapper[4982]: I0224 15:19:21.782209 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 24 15:19:22 crc kubenswrapper[4982]: I0224 15:19:22.771972 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7b46506f-5421-47ab-9ed9-2328c663adb8","Type":"ContainerStarted","Data":"e811ae2e8b9c4dc9523ef2a156bc50b4e411625198992a6c6f656fd3e7fb5d00"} Feb 24 15:19:22 crc kubenswrapper[4982]: I0224 15:19:22.772529 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7b46506f-5421-47ab-9ed9-2328c663adb8","Type":"ContainerStarted","Data":"db01a70ffe23413363576596f9043667942d990c219827c7e2b0d92e85576434"} Feb 24 15:19:22 crc kubenswrapper[4982]: I0224 15:19:22.940460 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Feb 24 15:19:23 crc kubenswrapper[4982]: I0224 15:19:23.792979 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7b46506f-5421-47ab-9ed9-2328c663adb8","Type":"ContainerStarted","Data":"ec9837aeec0fd79a82a36a7a76b306b0d3167fb27579ff047f15df33a42f09ae"} Feb 24 15:19:24 crc kubenswrapper[4982]: E0224 15:19:24.038038 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e5d5770_3a37_45ef_99d4_f51cfb8e42b4.slice/crio-d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e5d5770_3a37_45ef_99d4_f51cfb8e42b4.slice/crio-conmon-d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.510180 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.668241 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-plugins\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.668485 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-plugins-conf\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.668556 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-pod-info\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669201 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669238 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669364 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-config-data\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669436 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-tls\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669494 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-erlang-cookie-secret\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669516 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669592 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-confd\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669629 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hglx\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-kube-api-access-6hglx\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669671 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-erlang-cookie\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.669689 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-server-conf\") pod \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\" (UID: \"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4\") " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.670259 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.670276 4982 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.674479 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-pod-info" (OuterVolumeSpecName: "pod-info") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.674941 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.677053 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.681678 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.681788 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-kube-api-access-6hglx" (OuterVolumeSpecName: "kube-api-access-6hglx") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "kube-api-access-6hglx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.705888 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-config-data" (OuterVolumeSpecName: "config-data") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.724028 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c" (OuterVolumeSpecName: "persistence") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "pvc-36925be9-36a2-46f7-828a-eaea93d5583c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.732671 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-server-conf" (OuterVolumeSpecName: "server-conf") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.772903 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.772941 4982 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.772951 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hglx\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-kube-api-access-6hglx\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.772962 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.772970 4982 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-server-conf\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.772978 4982 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-pod-info\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.773008 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") on node \"crc\" " Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.773019 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.803718 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" (UID: "2e5d5770-3a37-45ef-99d4-f51cfb8e42b4"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.815942 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.816147 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-36925be9-36a2-46f7-828a-eaea93d5583c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c") on node "crc" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.830559 4982 generic.go:334] "Generic (PLEG): container finished" podID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerID="d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7" exitCode=0 Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.830643 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4","Type":"ContainerDied","Data":"d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7"} Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.830672 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"2e5d5770-3a37-45ef-99d4-f51cfb8e42b4","Type":"ContainerDied","Data":"83f842b007cb2fe779680453e09ac4cf62e43824ccab1ecd7dd7a4e4c4d42b20"} Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.830687 4982 scope.go:117] "RemoveContainer" containerID="d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.830810 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.837337 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7b46506f-5421-47ab-9ed9-2328c663adb8","Type":"ContainerStarted","Data":"71e6031756e13a5c227d0122914b4b95547360d98e69d57e27f0d9d0677cc79a"} Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.872107 4982 scope.go:117] "RemoveContainer" containerID="eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.875145 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.875177 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") on node \"crc\" DevicePath \"\"" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.879984 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.907326 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.919796 4982 scope.go:117] "RemoveContainer" containerID="d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.920222 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 24 15:19:24 crc kubenswrapper[4982]: E0224 15:19:24.920784 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerName="setup-container" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.920808 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerName="setup-container" Feb 24 15:19:24 crc kubenswrapper[4982]: E0224 15:19:24.920825 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerName="rabbitmq" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.920832 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerName="rabbitmq" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.921072 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" containerName="rabbitmq" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.922330 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 24 15:19:24 crc kubenswrapper[4982]: E0224 15:19:24.922611 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7\": container with ID starting with d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7 not found: ID does not exist" containerID="d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.922676 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7"} err="failed to get container status \"d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7\": rpc error: code = NotFound desc = could not find container \"d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7\": container with ID starting with d0ea0caffe0c3e800f2dde17f3a54e09a148071e47ff3ba6b85e015511d4c4d7 not found: ID does not exist" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.922710 4982 scope.go:117] "RemoveContainer" containerID="eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4" Feb 24 15:19:24 crc kubenswrapper[4982]: E0224 15:19:24.935578 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4\": container with ID starting with eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4 not found: ID does not exist" containerID="eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.935624 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4"} err="failed to get container status \"eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4\": rpc error: code = NotFound desc = could not find container \"eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4\": container with ID starting with eb8ad9ec151c615e4b9cfa83be963805a0cccbf371e5356772019b0435f0b9b4 not found: ID does not exist" Feb 24 15:19:24 crc kubenswrapper[4982]: I0224 15:19:24.963795 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079188 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079541 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/482245c3-03a8-4890-a48d-b234c5c78c3a-server-conf\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079562 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079597 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079632 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/482245c3-03a8-4890-a48d-b234c5c78c3a-config-data\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079658 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/482245c3-03a8-4890-a48d-b234c5c78c3a-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079717 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079741 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/482245c3-03a8-4890-a48d-b234c5c78c3a-pod-info\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079767 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/482245c3-03a8-4890-a48d-b234c5c78c3a-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079808 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.079828 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjs4\" (UniqueName: \"kubernetes.io/projected/482245c3-03a8-4890-a48d-b234c5c78c3a-kube-api-access-5sjs4\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.168747 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5d5770-3a37-45ef-99d4-f51cfb8e42b4" path="/var/lib/kubelet/pods/2e5d5770-3a37-45ef-99d4-f51cfb8e42b4/volumes" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.181969 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/482245c3-03a8-4890-a48d-b234c5c78c3a-server-conf\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182009 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182052 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182089 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/482245c3-03a8-4890-a48d-b234c5c78c3a-config-data\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182115 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/482245c3-03a8-4890-a48d-b234c5c78c3a-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182174 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182193 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/482245c3-03a8-4890-a48d-b234c5c78c3a-pod-info\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182217 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/482245c3-03a8-4890-a48d-b234c5c78c3a-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182255 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182270 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sjs4\" (UniqueName: \"kubernetes.io/projected/482245c3-03a8-4890-a48d-b234c5c78c3a-kube-api-access-5sjs4\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.182326 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.183187 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.183219 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.185234 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/482245c3-03a8-4890-a48d-b234c5c78c3a-config-data\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.186065 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/482245c3-03a8-4890-a48d-b234c5c78c3a-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.186162 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/482245c3-03a8-4890-a48d-b234c5c78c3a-server-conf\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.188922 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.188977 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/622162fda3c187b77c125c9b656ad6fabd080bc128e80c097463db866b5ded2c/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.190079 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/482245c3-03a8-4890-a48d-b234c5c78c3a-pod-info\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.190682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/482245c3-03a8-4890-a48d-b234c5c78c3a-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.196173 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.200838 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/482245c3-03a8-4890-a48d-b234c5c78c3a-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.205614 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sjs4\" (UniqueName: \"kubernetes.io/projected/482245c3-03a8-4890-a48d-b234c5c78c3a-kube-api-access-5sjs4\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.264080 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36925be9-36a2-46f7-828a-eaea93d5583c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36925be9-36a2-46f7-828a-eaea93d5583c\") pod \"rabbitmq-server-1\" (UID: \"482245c3-03a8-4890-a48d-b234c5c78c3a\") " pod="openstack/rabbitmq-server-1" Feb 24 15:19:25 crc kubenswrapper[4982]: I0224 15:19:25.557113 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 24 15:19:26 crc kubenswrapper[4982]: I0224 15:19:26.394533 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 24 15:19:26 crc kubenswrapper[4982]: I0224 15:19:26.865004 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"482245c3-03a8-4890-a48d-b234c5c78c3a","Type":"ContainerStarted","Data":"22f79574e88b88824c90cac0f1161ae16bd7711e8912ae2192bb16afa5bba59b"} Feb 24 15:19:26 crc kubenswrapper[4982]: I0224 15:19:26.868557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7b46506f-5421-47ab-9ed9-2328c663adb8","Type":"ContainerStarted","Data":"780ec54f5059beae7679fa9f2605fc8f827b663b02a15e15b443aa340a825a5b"} Feb 24 15:19:26 crc kubenswrapper[4982]: I0224 15:19:26.914755 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.848099974 podStartE2EDuration="6.914723975s" podCreationTimestamp="2026-02-24 15:19:20 +0000 UTC" firstStartedPulling="2026-02-24 15:19:21.781559812 +0000 UTC m=+1823.400618305" lastFinishedPulling="2026-02-24 15:19:25.848183813 +0000 UTC m=+1827.467242306" observedRunningTime="2026-02-24 15:19:26.901800544 +0000 UTC m=+1828.520859107" watchObservedRunningTime="2026-02-24 15:19:26.914723975 +0000 UTC m=+1828.533782508" Feb 24 15:19:28 crc kubenswrapper[4982]: I0224 15:19:28.145479 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:19:28 crc kubenswrapper[4982]: E0224 15:19:28.146284 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:19:28 crc kubenswrapper[4982]: I0224 15:19:28.889827 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"482245c3-03a8-4890-a48d-b234c5c78c3a","Type":"ContainerStarted","Data":"31e8c29b21ccc933de9b5e0ecb5fa35bd59baa1c699e88acf792d5ccf16eb492"} Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.111774 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v6wzd"] Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.115962 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.131484 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6wzd"] Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.167948 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-catalog-content\") pod \"certified-operators-v6wzd\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.168157 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-utilities\") pod \"certified-operators-v6wzd\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.168224 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljg9x\" (UniqueName: \"kubernetes.io/projected/2da0cdff-8112-479a-adf4-c8fbb3280bb3-kube-api-access-ljg9x\") pod \"certified-operators-v6wzd\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.271032 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-catalog-content\") pod \"certified-operators-v6wzd\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.271428 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-utilities\") pod \"certified-operators-v6wzd\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.271563 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljg9x\" (UniqueName: \"kubernetes.io/projected/2da0cdff-8112-479a-adf4-c8fbb3280bb3-kube-api-access-ljg9x\") pod \"certified-operators-v6wzd\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.272016 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-utilities\") pod \"certified-operators-v6wzd\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.272826 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-catalog-content\") pod \"certified-operators-v6wzd\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.297443 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ljg9x\" (UniqueName: \"kubernetes.io/projected/2da0cdff-8112-479a-adf4-c8fbb3280bb3-kube-api-access-ljg9x\") pod \"certified-operators-v6wzd\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:35 crc kubenswrapper[4982]: I0224 15:19:35.458967 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:36 crc kubenswrapper[4982]: I0224 15:19:36.020421 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6wzd"] Feb 24 15:19:37 crc kubenswrapper[4982]: I0224 15:19:37.041067 4982 generic.go:334] "Generic (PLEG): container finished" podID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerID="b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45" exitCode=0 Feb 24 15:19:37 crc kubenswrapper[4982]: I0224 15:19:37.041177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6wzd" event={"ID":"2da0cdff-8112-479a-adf4-c8fbb3280bb3","Type":"ContainerDied","Data":"b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45"} Feb 24 15:19:37 crc kubenswrapper[4982]: I0224 15:19:37.041416 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6wzd" event={"ID":"2da0cdff-8112-479a-adf4-c8fbb3280bb3","Type":"ContainerStarted","Data":"6bba9c2313dd082fa08907f98ebdf0986fa9dcd3de34a1c406edc5962ca6efc2"} Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.054883 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6wzd" event={"ID":"2da0cdff-8112-479a-adf4-c8fbb3280bb3","Type":"ContainerStarted","Data":"63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572"} Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.105602 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ln5h9"] Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.109471 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.121414 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ln5h9"] Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.256410 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-utilities\") pod \"redhat-operators-ln5h9\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.256717 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kvzj\" (UniqueName: \"kubernetes.io/projected/acced837-267d-459e-966d-e479ff357561-kube-api-access-5kvzj\") pod \"redhat-operators-ln5h9\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.256835 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-catalog-content\") pod \"redhat-operators-ln5h9\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.358729 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kvzj\" (UniqueName: \"kubernetes.io/projected/acced837-267d-459e-966d-e479ff357561-kube-api-access-5kvzj\") pod \"redhat-operators-ln5h9\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.358860 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-catalog-content\") pod \"redhat-operators-ln5h9\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.358969 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-utilities\") pod \"redhat-operators-ln5h9\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.359301 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-catalog-content\") pod \"redhat-operators-ln5h9\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.359407 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-utilities\") pod \"redhat-operators-ln5h9\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.376086 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5kvzj\" (UniqueName: \"kubernetes.io/projected/acced837-267d-459e-966d-e479ff357561-kube-api-access-5kvzj\") pod \"redhat-operators-ln5h9\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.431933 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:38 crc kubenswrapper[4982]: I0224 15:19:38.944034 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ln5h9"] Feb 24 15:19:39 crc kubenswrapper[4982]: I0224 15:19:39.067005 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln5h9" event={"ID":"acced837-267d-459e-966d-e479ff357561","Type":"ContainerStarted","Data":"8a3cf2780daeb16f577ca4e4cc70be8011574cc77e693ef382c2477f64f6c0b6"} Feb 24 15:19:39 crc kubenswrapper[4982]: I0224 15:19:39.158679 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:19:39 crc kubenswrapper[4982]: E0224 15:19:39.159150 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:19:40 crc kubenswrapper[4982]: I0224 15:19:40.084408 4982 generic.go:334] "Generic (PLEG): container finished" podID="acced837-267d-459e-966d-e479ff357561" containerID="ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85" exitCode=0 Feb 24 15:19:40 crc kubenswrapper[4982]: I0224 15:19:40.084493 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln5h9" event={"ID":"acced837-267d-459e-966d-e479ff357561","Type":"ContainerDied","Data":"ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85"} Feb 24 15:19:41 crc kubenswrapper[4982]: I0224 15:19:41.113253 4982 generic.go:334] "Generic (PLEG): container finished" podID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerID="63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572" exitCode=0 Feb 24 15:19:41 crc kubenswrapper[4982]: I0224 15:19:41.113322 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6wzd" event={"ID":"2da0cdff-8112-479a-adf4-c8fbb3280bb3","Type":"ContainerDied","Data":"63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572"} Feb 24 15:19:41 crc kubenswrapper[4982]: I0224 15:19:41.120256 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln5h9" event={"ID":"acced837-267d-459e-966d-e479ff357561","Type":"ContainerStarted","Data":"18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177"} Feb 24 15:19:43 crc kubenswrapper[4982]: I0224 15:19:43.045146 4982 scope.go:117] "RemoveContainer" containerID="8c0d91fb2cd8cfdb7af6bf9b0605eb2bde763475194cc60e01970098f7579267" Feb 24 15:19:43 crc kubenswrapper[4982]: I0224 15:19:43.074444 4982 scope.go:117] "RemoveContainer" containerID="f8a85a3dffdee28f1c54f7b2ee02e614c4ac7969ab7076b7516c8a1f18cb6410" Feb 24 15:19:43 crc kubenswrapper[4982]: I0224 15:19:43.159021 4982 scope.go:117] "RemoveContainer" 
containerID="d9485b817c09e03b97c9b208ac5ca680f9578bd551a39647700aa5e5e4baa3d1" Feb 24 15:19:43 crc kubenswrapper[4982]: I0224 15:19:43.164740 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6wzd" event={"ID":"2da0cdff-8112-479a-adf4-c8fbb3280bb3","Type":"ContainerStarted","Data":"55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411"} Feb 24 15:19:43 crc kubenswrapper[4982]: I0224 15:19:43.195372 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v6wzd" podStartSLOduration=2.662440304 podStartE2EDuration="8.195339292s" podCreationTimestamp="2026-02-24 15:19:35 +0000 UTC" firstStartedPulling="2026-02-24 15:19:37.044671928 +0000 UTC m=+1838.663730431" lastFinishedPulling="2026-02-24 15:19:42.577570896 +0000 UTC m=+1844.196629419" observedRunningTime="2026-02-24 15:19:43.182700948 +0000 UTC m=+1844.801759441" watchObservedRunningTime="2026-02-24 15:19:43.195339292 +0000 UTC m=+1844.814397785" Feb 24 15:19:43 crc kubenswrapper[4982]: I0224 15:19:43.211842 4982 scope.go:117] "RemoveContainer" containerID="1ebc174c4aa18925ef9c2a537cf33f2a9e4b83a088d8f92794fd4fe7ea09f349" Feb 24 15:19:43 crc kubenswrapper[4982]: I0224 15:19:43.240255 4982 scope.go:117] "RemoveContainer" containerID="280f2d154ff9dc4c2e21a24df558fcfa39d02ad9fedb5bf9d465f5514bdfc7cf" Feb 24 15:19:43 crc kubenswrapper[4982]: I0224 15:19:43.269868 4982 scope.go:117] "RemoveContainer" containerID="a49cceabac4005ea9fcba0f3ac98dee484269c7940d08c44e9fbc6c1e2495985" Feb 24 15:19:45 crc kubenswrapper[4982]: I0224 15:19:45.459715 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:45 crc kubenswrapper[4982]: I0224 15:19:45.460264 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:19:46 crc kubenswrapper[4982]: I0224 15:19:46.513440 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v6wzd" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="registry-server" probeResult="failure" output=< Feb 24 15:19:46 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:19:46 crc kubenswrapper[4982]: > Feb 24 15:19:48 crc kubenswrapper[4982]: I0224 15:19:48.225669 4982 generic.go:334] "Generic (PLEG): container finished" podID="acced837-267d-459e-966d-e479ff357561" containerID="18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177" exitCode=0 Feb 24 15:19:48 crc kubenswrapper[4982]: I0224 15:19:48.225708 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln5h9" event={"ID":"acced837-267d-459e-966d-e479ff357561","Type":"ContainerDied","Data":"18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177"} Feb 24 15:19:49 crc kubenswrapper[4982]: I0224 15:19:49.242816 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln5h9" event={"ID":"acced837-267d-459e-966d-e479ff357561","Type":"ContainerStarted","Data":"78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c"} Feb 24 15:19:49 crc kubenswrapper[4982]: I0224 15:19:49.273405 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ln5h9" podStartSLOduration=2.735133165 podStartE2EDuration="11.273386064s" 
podCreationTimestamp="2026-02-24 15:19:38 +0000 UTC" firstStartedPulling="2026-02-24 15:19:40.087118097 +0000 UTC m=+1841.706176600" lastFinishedPulling="2026-02-24 15:19:48.625371006 +0000 UTC m=+1850.244429499" observedRunningTime="2026-02-24 15:19:49.262017445 +0000 UTC m=+1850.881075948" watchObservedRunningTime="2026-02-24 15:19:49.273386064 +0000 UTC m=+1850.892444557" Feb 24 15:19:54 crc kubenswrapper[4982]: I0224 15:19:54.146615 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:19:54 crc kubenswrapper[4982]: E0224 15:19:54.147168 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:19:56 crc kubenswrapper[4982]: I0224 15:19:56.529605 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v6wzd" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="registry-server" probeResult="failure" output=< Feb 24 15:19:56 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:19:56 crc kubenswrapper[4982]: > Feb 24 15:19:58 crc kubenswrapper[4982]: I0224 15:19:58.432166 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:58 crc kubenswrapper[4982]: I0224 15:19:58.432540 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:19:59 crc kubenswrapper[4982]: I0224 15:19:59.482771 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ln5h9" podUID="acced837-267d-459e-966d-e479ff357561" containerName="registry-server" probeResult="failure" output=< Feb 24 15:19:59 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:19:59 crc kubenswrapper[4982]: > Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.148320 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532440-qdpb4"] Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.150167 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532440-qdpb4" Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.153725 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.153883 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.153880 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.164982 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532440-qdpb4"] Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.313458 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4sz\" (UniqueName: \"kubernetes.io/projected/5837cfa1-d5c4-468c-b2a9-75042b7ad1a8-kube-api-access-sz4sz\") pod \"auto-csr-approver-29532440-qdpb4\" (UID: \"5837cfa1-d5c4-468c-b2a9-75042b7ad1a8\") " pod="openshift-infra/auto-csr-approver-29532440-qdpb4" Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.415823 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4sz\" (UniqueName: \"kubernetes.io/projected/5837cfa1-d5c4-468c-b2a9-75042b7ad1a8-kube-api-access-sz4sz\") pod \"auto-csr-approver-29532440-qdpb4\" (UID: \"5837cfa1-d5c4-468c-b2a9-75042b7ad1a8\") " pod="openshift-infra/auto-csr-approver-29532440-qdpb4" Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.435721 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4sz\" (UniqueName: \"kubernetes.io/projected/5837cfa1-d5c4-468c-b2a9-75042b7ad1a8-kube-api-access-sz4sz\") pod \"auto-csr-approver-29532440-qdpb4\" (UID: \"5837cfa1-d5c4-468c-b2a9-75042b7ad1a8\") " pod="openshift-infra/auto-csr-approver-29532440-qdpb4" Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.468429 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532440-qdpb4" Feb 24 15:20:00 crc kubenswrapper[4982]: I0224 15:20:00.984557 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532440-qdpb4"] Feb 24 15:20:01 crc kubenswrapper[4982]: I0224 15:20:01.394141 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532440-qdpb4" event={"ID":"5837cfa1-d5c4-468c-b2a9-75042b7ad1a8","Type":"ContainerStarted","Data":"3b0e39dbaf48d62bfc359b2eddd65009eb882728c67d669de5d9594dc76a1f23"} Feb 24 15:20:01 crc kubenswrapper[4982]: I0224 15:20:01.398274 4982 generic.go:334] "Generic (PLEG): container finished" podID="482245c3-03a8-4890-a48d-b234c5c78c3a" containerID="31e8c29b21ccc933de9b5e0ecb5fa35bd59baa1c699e88acf792d5ccf16eb492" exitCode=0 Feb 24 15:20:01 crc kubenswrapper[4982]: I0224 15:20:01.398326 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"482245c3-03a8-4890-a48d-b234c5c78c3a","Type":"ContainerDied","Data":"31e8c29b21ccc933de9b5e0ecb5fa35bd59baa1c699e88acf792d5ccf16eb492"} Feb 24 15:20:02 crc kubenswrapper[4982]: I0224 15:20:02.412968 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532440-qdpb4" event={"ID":"5837cfa1-d5c4-468c-b2a9-75042b7ad1a8","Type":"ContainerStarted","Data":"e94abd1a8c937c12b261a84c41223912ce51b68b1c0527f5219c9b09330b801e"} Feb 24 15:20:02 crc kubenswrapper[4982]: I0224 15:20:02.415517 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"482245c3-03a8-4890-a48d-b234c5c78c3a","Type":"ContainerStarted","Data":"8d7ece6dcafba3c2ef4be8916323f1636b043f51f50fb70d95abc86384befb41"} Feb 24 15:20:02 crc kubenswrapper[4982]: I0224 15:20:02.415680 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 24 15:20:02 crc kubenswrapper[4982]: I0224 15:20:02.430482 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532440-qdpb4" podStartSLOduration=1.487619485 podStartE2EDuration="2.430465199s" podCreationTimestamp="2026-02-24 15:20:00 +0000 UTC" firstStartedPulling="2026-02-24 15:20:00.995759229 +0000 UTC m=+1862.614817722" lastFinishedPulling="2026-02-24 15:20:01.938604933 +0000 UTC m=+1863.557663436" observedRunningTime="2026-02-24 15:20:02.424452276 +0000 UTC m=+1864.043510779" watchObservedRunningTime="2026-02-24 15:20:02.430465199 +0000 UTC m=+1864.049523692" Feb 24 15:20:02 crc kubenswrapper[4982]: I0224 15:20:02.454759 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=38.454742348 podStartE2EDuration="38.454742348s" podCreationTimestamp="2026-02-24 15:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:20:02.447874722 +0000 UTC m=+1864.066933215" watchObservedRunningTime="2026-02-24 15:20:02.454742348 +0000 UTC m=+1864.073800841" Feb 24 15:20:03 crc kubenswrapper[4982]: I0224 15:20:03.427691 4982 generic.go:334] "Generic (PLEG): container finished" podID="5837cfa1-d5c4-468c-b2a9-75042b7ad1a8" containerID="e94abd1a8c937c12b261a84c41223912ce51b68b1c0527f5219c9b09330b801e" exitCode=0 Feb 24 15:20:03 crc kubenswrapper[4982]: I0224 15:20:03.427882 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29532440-qdpb4" event={"ID":"5837cfa1-d5c4-468c-b2a9-75042b7ad1a8","Type":"ContainerDied","Data":"e94abd1a8c937c12b261a84c41223912ce51b68b1c0527f5219c9b09330b801e"} Feb 24 15:20:04 crc kubenswrapper[4982]: I0224 15:20:04.974679 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532440-qdpb4" Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.028617 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz4sz\" (UniqueName: \"kubernetes.io/projected/5837cfa1-d5c4-468c-b2a9-75042b7ad1a8-kube-api-access-sz4sz\") pod \"5837cfa1-d5c4-468c-b2a9-75042b7ad1a8\" (UID: \"5837cfa1-d5c4-468c-b2a9-75042b7ad1a8\") " Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.054415 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5837cfa1-d5c4-468c-b2a9-75042b7ad1a8-kube-api-access-sz4sz" (OuterVolumeSpecName: "kube-api-access-sz4sz") pod "5837cfa1-d5c4-468c-b2a9-75042b7ad1a8" (UID: "5837cfa1-d5c4-468c-b2a9-75042b7ad1a8"). InnerVolumeSpecName "kube-api-access-sz4sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.132246 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz4sz\" (UniqueName: \"kubernetes.io/projected/5837cfa1-d5c4-468c-b2a9-75042b7ad1a8-kube-api-access-sz4sz\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.453484 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532440-qdpb4" event={"ID":"5837cfa1-d5c4-468c-b2a9-75042b7ad1a8","Type":"ContainerDied","Data":"3b0e39dbaf48d62bfc359b2eddd65009eb882728c67d669de5d9594dc76a1f23"} Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.453546 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b0e39dbaf48d62bfc359b2eddd65009eb882728c67d669de5d9594dc76a1f23" Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.453556 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532440-qdpb4" Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.499979 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532434-bt9vj"] Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.511273 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532434-bt9vj"] Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.517039 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:20:05 crc kubenswrapper[4982]: I0224 15:20:05.566322 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:20:06 crc kubenswrapper[4982]: I0224 15:20:06.147247 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:20:06 crc kubenswrapper[4982]: E0224 15:20:06.147599 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:20:06 crc kubenswrapper[4982]: I0224 15:20:06.306942 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v6wzd"] Feb 24 15:20:07 crc kubenswrapper[4982]: I0224 15:20:07.159713 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f6a4ae-f033-46bd-8f3b-5c625216a825" path="/var/lib/kubelet/pods/a9f6a4ae-f033-46bd-8f3b-5c625216a825/volumes" Feb 24 15:20:07 crc kubenswrapper[4982]: I0224 15:20:07.478261 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v6wzd" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="registry-server" containerID="cri-o://55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411" gracePeriod=2 Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.144952 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.244165 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-catalog-content\") pod \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.245016 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljg9x\" (UniqueName: \"kubernetes.io/projected/2da0cdff-8112-479a-adf4-c8fbb3280bb3-kube-api-access-ljg9x\") pod \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.245097 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-utilities\") pod \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\" (UID: \"2da0cdff-8112-479a-adf4-c8fbb3280bb3\") " Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.254523 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-utilities" (OuterVolumeSpecName: "utilities") pod "2da0cdff-8112-479a-adf4-c8fbb3280bb3" (UID: "2da0cdff-8112-479a-adf4-c8fbb3280bb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.270789 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da0cdff-8112-479a-adf4-c8fbb3280bb3-kube-api-access-ljg9x" (OuterVolumeSpecName: "kube-api-access-ljg9x") pod "2da0cdff-8112-479a-adf4-c8fbb3280bb3" (UID: "2da0cdff-8112-479a-adf4-c8fbb3280bb3"). InnerVolumeSpecName "kube-api-access-ljg9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.335297 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2da0cdff-8112-479a-adf4-c8fbb3280bb3" (UID: "2da0cdff-8112-479a-adf4-c8fbb3280bb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.348218 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljg9x\" (UniqueName: \"kubernetes.io/projected/2da0cdff-8112-479a-adf4-c8fbb3280bb3-kube-api-access-ljg9x\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.348250 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.348260 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da0cdff-8112-479a-adf4-c8fbb3280bb3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.491974 4982 generic.go:334] "Generic (PLEG): container finished" podID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerID="55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411" exitCode=0 Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.492289 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6wzd" event={"ID":"2da0cdff-8112-479a-adf4-c8fbb3280bb3","Type":"ContainerDied","Data":"55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411"} Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.492435 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6wzd" event={"ID":"2da0cdff-8112-479a-adf4-c8fbb3280bb3","Type":"ContainerDied","Data":"6bba9c2313dd082fa08907f98ebdf0986fa9dcd3de34a1c406edc5962ca6efc2"} Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.492363 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v6wzd" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.492561 4982 scope.go:117] "RemoveContainer" containerID="55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.519073 4982 scope.go:117] "RemoveContainer" containerID="63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.546234 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v6wzd"] Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.550805 4982 scope.go:117] "RemoveContainer" containerID="b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.562000 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v6wzd"] Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.600587 4982 scope.go:117] "RemoveContainer" containerID="55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411" Feb 24 15:20:08 crc kubenswrapper[4982]: E0224 15:20:08.601215 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411\": container with ID starting with 55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411 not found: ID does not exist" containerID="55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.601245 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411"} err="failed to get container status \"55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411\": rpc error: code = NotFound desc = could not find container \"55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411\": container with ID starting with 55c565d8066f46b704c77d6ae1026418b9f1d46d96d7186793aba9d24982b411 not found: ID does not exist" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.601279 4982 scope.go:117] "RemoveContainer" containerID="63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572" Feb 24 15:20:08 crc kubenswrapper[4982]: E0224 15:20:08.601677 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572\": container with ID starting with 63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572 not found: ID does not exist" containerID="63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.601925 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572"} err="failed to get container status \"63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572\": rpc error: code = NotFound desc = could not find container \"63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572\": container with ID starting with 63e2f00548a2855b819b88118b86a18799e896d26336b139044b4385d2b6a572 not found: ID does not exist" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.601960 4982 scope.go:117] "RemoveContainer" 
containerID="b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45" Feb 24 15:20:08 crc kubenswrapper[4982]: E0224 15:20:08.602877 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45\": container with ID starting with b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45 not found: ID does not exist" containerID="b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45" Feb 24 15:20:08 crc kubenswrapper[4982]: I0224 15:20:08.602902 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45"} err="failed to get container status \"b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45\": rpc error: code = NotFound desc = could not find container \"b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45\": container with ID starting with b7862439c1d267cf50944e8613d984d9ceebbebda902c305a410cd6500cdcb45 not found: ID does not exist" Feb 24 15:20:09 crc kubenswrapper[4982]: I0224 15:20:09.177256 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" path="/var/lib/kubelet/pods/2da0cdff-8112-479a-adf4-c8fbb3280bb3/volumes" Feb 24 15:20:09 crc kubenswrapper[4982]: I0224 15:20:09.500200 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ln5h9" podUID="acced837-267d-459e-966d-e479ff357561" containerName="registry-server" probeResult="failure" output=< Feb 24 15:20:09 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:20:09 crc kubenswrapper[4982]: > Feb 24 15:20:15 crc kubenswrapper[4982]: I0224 15:20:15.560746 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 24 15:20:15 crc kubenswrapper[4982]: I0224 15:20:15.663718 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 15:20:19 crc kubenswrapper[4982]: I0224 15:20:19.160853 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:20:19 crc kubenswrapper[4982]: E0224 15:20:19.161559 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:20:19 crc kubenswrapper[4982]: I0224 15:20:19.483304 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ln5h9" podUID="acced837-267d-459e-966d-e479ff357561" containerName="registry-server" probeResult="failure" output=< Feb 24 15:20:19 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:20:19 crc kubenswrapper[4982]: > Feb 24 15:20:19 crc kubenswrapper[4982]: I0224 15:20:19.916763 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="511c8aa0-4327-455c-8caa-66bc442d199f" containerName="rabbitmq" containerID="cri-o://f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53" 
gracePeriod=604796 Feb 24 15:20:22 crc kubenswrapper[4982]: I0224 15:20:22.561945 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="511c8aa0-4327-455c-8caa-66bc442d199f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.675782 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.704460 4982 generic.go:334] "Generic (PLEG): container finished" podID="511c8aa0-4327-455c-8caa-66bc442d199f" containerID="f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53" exitCode=0 Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.704528 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"511c8aa0-4327-455c-8caa-66bc442d199f","Type":"ContainerDied","Data":"f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53"} Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.704557 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.704854 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"511c8aa0-4327-455c-8caa-66bc442d199f","Type":"ContainerDied","Data":"bc85243f9648df10a7ce8e80ea141fb0689a697467300867d7668654a92195c4"} Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.704883 4982 scope.go:117] "RemoveContainer" containerID="f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.752687 4982 scope.go:117] "RemoveContainer" containerID="9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.837537 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-server-conf\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.837622 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/511c8aa0-4327-455c-8caa-66bc442d199f-pod-info\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.837698 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-erlang-cookie\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.838617 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.838695 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-tls\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.838779 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-confd\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.838829 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-plugins-conf\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.838892 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqn2v\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-kube-api-access-vqn2v\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.839040 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-config-data\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.839074 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/511c8aa0-4327-455c-8caa-66bc442d199f-erlang-cookie-secret\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.839168 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-plugins\") pod \"511c8aa0-4327-455c-8caa-66bc442d199f\" (UID: \"511c8aa0-4327-455c-8caa-66bc442d199f\") " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.847919 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/511c8aa0-4327-455c-8caa-66bc442d199f-pod-info" (OuterVolumeSpecName: "pod-info") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.848193 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.849083 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.850953 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.852573 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.855316 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-kube-api-access-vqn2v" (OuterVolumeSpecName: "kube-api-access-vqn2v") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "kube-api-access-vqn2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.871661 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511c8aa0-4327-455c-8caa-66bc442d199f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.874756 4982 scope.go:117] "RemoveContainer" containerID="f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53" Feb 24 15:20:26 crc kubenswrapper[4982]: E0224 15:20:26.875174 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53\": container with ID starting with f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53 not found: ID does not exist" containerID="f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.875219 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53"} err="failed to get container status \"f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53\": rpc error: code = NotFound desc = could not find container \"f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53\": container with ID starting with f913f463c11fc370c41a87262e32af4110bc709d44088733f0c7e787e14feb53 not found: ID does not exist" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.875245 4982 scope.go:117] "RemoveContainer" containerID="9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179" Feb 24 15:20:26 crc kubenswrapper[4982]: E0224 15:20:26.875512 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179\": container with ID starting with 9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179 not found: ID does not exist" containerID="9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.875532 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179"} err="failed to get container status \"9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179\": rpc error: code = NotFound desc = could not find container \"9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179\": container with ID starting with 9a0697b409cd780a0ad8d10a7de3e4211ef39b72bf8d6de201c96a50de40a179 not found: ID does not exist" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.881786 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86" (OuterVolumeSpecName: "persistence") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "pvc-f6339ee7-77ff-4e69-843d-aa663c607c86". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.900020 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-config-data" (OuterVolumeSpecName: "config-data") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.931791 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-server-conf" (OuterVolumeSpecName: "server-conf") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946066 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946108 4982 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/511c8aa0-4327-455c-8caa-66bc442d199f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946121 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946134 4982 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946144 4982 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/511c8aa0-4327-455c-8caa-66bc442d199f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946157 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946190 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") on node \"crc\" " Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946202 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946211 4982 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/511c8aa0-4327-455c-8caa-66bc442d199f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.946222 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqn2v\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-kube-api-access-vqn2v\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.992906 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 24 15:20:26 crc kubenswrapper[4982]: I0224 15:20:26.993585 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f6339ee7-77ff-4e69-843d-aa663c607c86" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86") on node "crc" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.017819 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "511c8aa0-4327-455c-8caa-66bc442d199f" (UID: "511c8aa0-4327-455c-8caa-66bc442d199f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.048718 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.048772 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/511c8aa0-4327-455c-8caa-66bc442d199f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.351649 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.364128 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.380737 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 15:20:27 crc kubenswrapper[4982]: E0224 15:20:27.381284 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="registry-server" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.381312 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="registry-server" Feb 24 15:20:27 crc kubenswrapper[4982]: E0224 15:20:27.381329 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="extract-utilities" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.381339 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="extract-utilities" Feb 24 15:20:27 crc kubenswrapper[4982]: E0224 15:20:27.381364 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5837cfa1-d5c4-468c-b2a9-75042b7ad1a8" containerName="oc" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.381372 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5837cfa1-d5c4-468c-b2a9-75042b7ad1a8" containerName="oc" Feb 24 15:20:27 crc kubenswrapper[4982]: E0224 15:20:27.381395 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="extract-content" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.381405 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="extract-content" Feb 24 15:20:27 crc kubenswrapper[4982]: E0224 15:20:27.381443 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511c8aa0-4327-455c-8caa-66bc442d199f" containerName="rabbitmq" Feb 24 15:20:27 
crc kubenswrapper[4982]: I0224 15:20:27.381451 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="511c8aa0-4327-455c-8caa-66bc442d199f" containerName="rabbitmq" Feb 24 15:20:27 crc kubenswrapper[4982]: E0224 15:20:27.381467 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511c8aa0-4327-455c-8caa-66bc442d199f" containerName="setup-container" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.381476 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="511c8aa0-4327-455c-8caa-66bc442d199f" containerName="setup-container" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.382525 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da0cdff-8112-479a-adf4-c8fbb3280bb3" containerName="registry-server" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.382575 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="511c8aa0-4327-455c-8caa-66bc442d199f" containerName="rabbitmq" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.382585 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5837cfa1-d5c4-468c-b2a9-75042b7ad1a8" containerName="oc" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.384124 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.399679 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.457884 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5m4t\" (UniqueName: \"kubernetes.io/projected/ec0c9d67-9dca-4bd7-bd58-fa6185479916-kube-api-access-b5m4t\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.457940 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.458084 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec0c9d67-9dca-4bd7-bd58-fa6185479916-config-data\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.458159 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.458185 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec0c9d67-9dca-4bd7-bd58-fa6185479916-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.458335 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec0c9d67-9dca-4bd7-bd58-fa6185479916-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.458561 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.458612 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec0c9d67-9dca-4bd7-bd58-fa6185479916-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.458649 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.458841 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec0c9d67-9dca-4bd7-bd58-fa6185479916-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.459216 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.561757 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.561843 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec0c9d67-9dca-4bd7-bd58-fa6185479916-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.561873 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.561918 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec0c9d67-9dca-4bd7-bd58-fa6185479916-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.562035 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.562630 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5m4t\" (UniqueName: \"kubernetes.io/projected/ec0c9d67-9dca-4bd7-bd58-fa6185479916-kube-api-access-b5m4t\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.562492 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.562566 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.562712 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.562758 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec0c9d67-9dca-4bd7-bd58-fa6185479916-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.562875 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec0c9d67-9dca-4bd7-bd58-fa6185479916-config-data\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.562955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.563008 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec0c9d67-9dca-4bd7-bd58-fa6185479916-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 
crc kubenswrapper[4982]: I0224 15:20:27.563086 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec0c9d67-9dca-4bd7-bd58-fa6185479916-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.564114 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec0c9d67-9dca-4bd7-bd58-fa6185479916-config-data\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.564614 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec0c9d67-9dca-4bd7-bd58-fa6185479916-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.567063 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.567108 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1644dd700f873427ddf54e59bc76644eeebe7e9509c2e4034b86df93b2f0369d/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.567978 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.568134 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec0c9d67-9dca-4bd7-bd58-fa6185479916-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.569438 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec0c9d67-9dca-4bd7-bd58-fa6185479916-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.587001 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec0c9d67-9dca-4bd7-bd58-fa6185479916-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.589150 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5m4t\" (UniqueName: \"kubernetes.io/projected/ec0c9d67-9dca-4bd7-bd58-fa6185479916-kube-api-access-b5m4t\") pod \"rabbitmq-server-0\" (UID: 
\"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.655845 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6339ee7-77ff-4e69-843d-aa663c607c86\") pod \"rabbitmq-server-0\" (UID: \"ec0c9d67-9dca-4bd7-bd58-fa6185479916\") " pod="openstack/rabbitmq-server-0" Feb 24 15:20:27 crc kubenswrapper[4982]: I0224 15:20:27.707209 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 15:20:28 crc kubenswrapper[4982]: I0224 15:20:28.314890 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 15:20:28 crc kubenswrapper[4982]: I0224 15:20:28.508966 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:20:28 crc kubenswrapper[4982]: I0224 15:20:28.578208 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:20:28 crc kubenswrapper[4982]: I0224 15:20:28.732536 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec0c9d67-9dca-4bd7-bd58-fa6185479916","Type":"ContainerStarted","Data":"f884a67bce58a3d4885f7ceec3cb47be9b582b8e5ff62378bef029e7bffdc59d"} Feb 24 15:20:29 crc kubenswrapper[4982]: I0224 15:20:29.164723 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511c8aa0-4327-455c-8caa-66bc442d199f" path="/var/lib/kubelet/pods/511c8aa0-4327-455c-8caa-66bc442d199f/volumes" Feb 24 15:20:30 crc kubenswrapper[4982]: I0224 15:20:30.798116 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ln5h9"] Feb 24 15:20:30 crc kubenswrapper[4982]: I0224 15:20:30.798404 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ln5h9" podUID="acced837-267d-459e-966d-e479ff357561" containerName="registry-server" containerID="cri-o://78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c" gracePeriod=2 Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.396263 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.467918 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kvzj\" (UniqueName: \"kubernetes.io/projected/acced837-267d-459e-966d-e479ff357561-kube-api-access-5kvzj\") pod \"acced837-267d-459e-966d-e479ff357561\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.467987 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-utilities\") pod \"acced837-267d-459e-966d-e479ff357561\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.468177 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-catalog-content\") pod \"acced837-267d-459e-966d-e479ff357561\" (UID: \"acced837-267d-459e-966d-e479ff357561\") " Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.469050 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-utilities" (OuterVolumeSpecName: "utilities") pod "acced837-267d-459e-966d-e479ff357561" (UID: "acced837-267d-459e-966d-e479ff357561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.477597 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acced837-267d-459e-966d-e479ff357561-kube-api-access-5kvzj" (OuterVolumeSpecName: "kube-api-access-5kvzj") pod "acced837-267d-459e-966d-e479ff357561" (UID: "acced837-267d-459e-966d-e479ff357561"). InnerVolumeSpecName "kube-api-access-5kvzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.572347 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kvzj\" (UniqueName: \"kubernetes.io/projected/acced837-267d-459e-966d-e479ff357561-kube-api-access-5kvzj\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.572403 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.600045 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acced837-267d-459e-966d-e479ff357561" (UID: "acced837-267d-459e-966d-e479ff357561"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.675153 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acced837-267d-459e-966d-e479ff357561-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.771101 4982 generic.go:334] "Generic (PLEG): container finished" podID="acced837-267d-459e-966d-e479ff357561" containerID="78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c" exitCode=0 Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.771173 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln5h9" event={"ID":"acced837-267d-459e-966d-e479ff357561","Type":"ContainerDied","Data":"78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c"} Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.771202 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln5h9" event={"ID":"acced837-267d-459e-966d-e479ff357561","Type":"ContainerDied","Data":"8a3cf2780daeb16f577ca4e4cc70be8011574cc77e693ef382c2477f64f6c0b6"} Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.771197 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ln5h9" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.771237 4982 scope.go:117] "RemoveContainer" containerID="78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.774425 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec0c9d67-9dca-4bd7-bd58-fa6185479916","Type":"ContainerStarted","Data":"6f9e93fb094d42000f42f8ac6b6f1547aa4b3f4ab8abc827e8b5323030951c0a"} Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.829116 4982 scope.go:117] "RemoveContainer" containerID="18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.845346 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ln5h9"] Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.857540 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ln5h9"] Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.862774 4982 scope.go:117] "RemoveContainer" containerID="ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.915124 4982 scope.go:117] "RemoveContainer" containerID="78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c" Feb 24 15:20:31 crc kubenswrapper[4982]: E0224 15:20:31.915614 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c\": container with ID starting with 78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c not found: ID does not exist" containerID="78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.915648 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c"} err="failed to get container status 
\"78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c\": rpc error: code = NotFound desc = could not find container \"78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c\": container with ID starting with 78a1c20e9473d1e47d8e6e76d66772ded81a661677eaf1b38d94104f806c675c not found: ID does not exist" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.915672 4982 scope.go:117] "RemoveContainer" containerID="18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177" Feb 24 15:20:31 crc kubenswrapper[4982]: E0224 15:20:31.916089 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177\": container with ID starting with 18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177 not found: ID does not exist" containerID="18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.916129 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177"} err="failed to get container status \"18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177\": rpc error: code = NotFound desc = could not find container \"18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177\": container with ID starting with 18ba8820962361c9634ec5ab9aa7c28e978fdc93c5382104447d50a05ebb5177 not found: ID does not exist" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.916144 4982 scope.go:117] "RemoveContainer" containerID="ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85" Feb 24 15:20:31 crc kubenswrapper[4982]: E0224 15:20:31.916555 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85\": container with ID starting with ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85 not found: ID does not exist" containerID="ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85" Feb 24 15:20:31 crc kubenswrapper[4982]: I0224 15:20:31.916581 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85"} err="failed to get container status \"ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85\": rpc error: code = NotFound desc = could not find container \"ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85\": container with ID starting with ed981343adb9f837e43dde2b7b5a45cdcc633bfef4a6d8dd565064bb8a11ec85 not found: ID does not exist" Feb 24 15:20:33 crc kubenswrapper[4982]: I0224 15:20:33.147042 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:20:33 crc kubenswrapper[4982]: E0224 15:20:33.148043 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:20:33 crc kubenswrapper[4982]: I0224 15:20:33.159705 4982 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acced837-267d-459e-966d-e479ff357561" path="/var/lib/kubelet/pods/acced837-267d-459e-966d-e479ff357561/volumes" Feb 24 15:20:43 crc kubenswrapper[4982]: I0224 15:20:43.600894 4982 scope.go:117] "RemoveContainer" containerID="8c1d16c9e4dea6ffd09e69a8010ba0cfa359df15d24558375d3d5feb0da4b9bb" Feb 24 15:20:43 crc kubenswrapper[4982]: I0224 15:20:43.642335 4982 scope.go:117] "RemoveContainer" containerID="0ddd0fa30d31a5ee8f00ee221550cb7d465a0c78fb0b81e27eb57047a390b583" Feb 24 15:20:43 crc kubenswrapper[4982]: I0224 15:20:43.687968 4982 scope.go:117] "RemoveContainer" containerID="6edac1320a3854fa2a5629c14c686f18e776ae387ac8068bf88ec54c5e861229" Feb 24 15:20:43 crc kubenswrapper[4982]: I0224 15:20:43.726729 4982 scope.go:117] "RemoveContainer" containerID="487268aa12812e782accb6056345b31223df0c570dcf4cd02d6f35a84ac9ff01" Feb 24 15:20:43 crc kubenswrapper[4982]: I0224 15:20:43.787685 4982 scope.go:117] "RemoveContainer" containerID="b6896e14f215ce0653b4a1acfa503455ce4c8d01e117663140c06e0178677844" Feb 24 15:20:44 crc kubenswrapper[4982]: I0224 15:20:44.146339 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:20:44 crc kubenswrapper[4982]: E0224 15:20:44.146989 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:20:57 crc kubenswrapper[4982]: I0224 15:20:57.145590 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:20:57 crc kubenswrapper[4982]: E0224 15:20:57.146367 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:21:03 crc kubenswrapper[4982]: I0224 15:21:03.228470 4982 generic.go:334] "Generic (PLEG): container finished" podID="ec0c9d67-9dca-4bd7-bd58-fa6185479916" containerID="6f9e93fb094d42000f42f8ac6b6f1547aa4b3f4ab8abc827e8b5323030951c0a" exitCode=0 Feb 24 15:21:03 crc kubenswrapper[4982]: I0224 15:21:03.228593 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec0c9d67-9dca-4bd7-bd58-fa6185479916","Type":"ContainerDied","Data":"6f9e93fb094d42000f42f8ac6b6f1547aa4b3f4ab8abc827e8b5323030951c0a"} Feb 24 15:21:04 crc kubenswrapper[4982]: I0224 15:21:04.243755 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ec0c9d67-9dca-4bd7-bd58-fa6185479916","Type":"ContainerStarted","Data":"84eb32152953f0f1e9e82158ee910958d9e6e35cfab8e385d9d35f463d2a7270"} Feb 24 15:21:04 crc kubenswrapper[4982]: I0224 15:21:04.244532 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 24 15:21:04 crc kubenswrapper[4982]: I0224 15:21:04.288989 4982 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.288968717 podStartE2EDuration="37.288968717s" podCreationTimestamp="2026-02-24 15:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:21:04.273906768 +0000 UTC m=+1925.892965271" watchObservedRunningTime="2026-02-24 15:21:04.288968717 +0000 UTC m=+1925.908027230" Feb 24 15:21:12 crc kubenswrapper[4982]: I0224 15:21:12.145788 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:21:12 crc kubenswrapper[4982]: E0224 15:21:12.147788 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:21:17 crc kubenswrapper[4982]: I0224 15:21:17.711706 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 24 15:21:26 crc kubenswrapper[4982]: I0224 15:21:26.146015 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:21:26 crc kubenswrapper[4982]: E0224 15:21:26.146691 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:21:40 crc kubenswrapper[4982]: I0224 15:21:40.147384 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:21:40 crc kubenswrapper[4982]: E0224 15:21:40.148910 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:21:43 crc kubenswrapper[4982]: I0224 15:21:43.985643 4982 scope.go:117] "RemoveContainer" containerID="ba973e68f7e02b63d90d5d7c63472294f445f6b7fffd9e33bf086a4c932d7f9a" Feb 24 15:21:44 crc kubenswrapper[4982]: I0224 15:21:44.019127 4982 scope.go:117] "RemoveContainer" containerID="8550fcdfee5c08bee896fe98e4493bf90696d34e628bb777d34b6702dd9917f5" Feb 24 15:21:51 crc kubenswrapper[4982]: I0224 15:21:51.145696 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:21:51 crc kubenswrapper[4982]: E0224 15:21:51.146512 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 24 15:21:51 crc kubenswrapper[4982]: I0224 15:21:51.145696 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7"
Feb 24 15:21:51 crc kubenswrapper[4982]: E0224 15:21:51.146512 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.154699 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532442-hngb4"]
Feb 24 15:22:00 crc kubenswrapper[4982]: E0224 15:22:00.157204 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acced837-267d-459e-966d-e479ff357561" containerName="extract-utilities"
Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.157341 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="acced837-267d-459e-966d-e479ff357561" containerName="extract-utilities"
Feb 24 15:22:00 crc kubenswrapper[4982]: E0224 15:22:00.157467 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acced837-267d-459e-966d-e479ff357561" containerName="extract-content"
Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.157582 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="acced837-267d-459e-966d-e479ff357561" containerName="extract-content"
Feb 24 15:22:00 crc kubenswrapper[4982]: E0224 15:22:00.157692 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acced837-267d-459e-966d-e479ff357561" containerName="registry-server"
Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.157788 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="acced837-267d-459e-966d-e479ff357561" containerName="registry-server"
Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.158287 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="acced837-267d-459e-966d-e479ff357561" containerName="registry-server"
Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.159807 4982 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532442-hngb4" Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.162769 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.163025 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.163220 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.169046 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532442-hngb4"] Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.335331 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb7ln\" (UniqueName: \"kubernetes.io/projected/2d51af88-ef7d-46e7-ab4b-d1ccf200a068-kube-api-access-mb7ln\") pod \"auto-csr-approver-29532442-hngb4\" (UID: \"2d51af88-ef7d-46e7-ab4b-d1ccf200a068\") " pod="openshift-infra/auto-csr-approver-29532442-hngb4" Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.438043 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb7ln\" (UniqueName: \"kubernetes.io/projected/2d51af88-ef7d-46e7-ab4b-d1ccf200a068-kube-api-access-mb7ln\") pod \"auto-csr-approver-29532442-hngb4\" (UID: \"2d51af88-ef7d-46e7-ab4b-d1ccf200a068\") " pod="openshift-infra/auto-csr-approver-29532442-hngb4" Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.460283 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb7ln\" (UniqueName: \"kubernetes.io/projected/2d51af88-ef7d-46e7-ab4b-d1ccf200a068-kube-api-access-mb7ln\") pod \"auto-csr-approver-29532442-hngb4\" (UID: \"2d51af88-ef7d-46e7-ab4b-d1ccf200a068\") " pod="openshift-infra/auto-csr-approver-29532442-hngb4" Feb 24 15:22:00 crc kubenswrapper[4982]: I0224 15:22:00.477880 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532442-hngb4" Feb 24 15:22:01 crc kubenswrapper[4982]: I0224 15:22:01.017606 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532442-hngb4"] Feb 24 15:22:01 crc kubenswrapper[4982]: I0224 15:22:01.965565 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532442-hngb4" event={"ID":"2d51af88-ef7d-46e7-ab4b-d1ccf200a068","Type":"ContainerStarted","Data":"5f5514c920a7fe726a5a1439ce4031e6ae01c62283d2ce5c35fff8a783c15347"} Feb 24 15:22:02 crc kubenswrapper[4982]: I0224 15:22:02.976774 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532442-hngb4" event={"ID":"2d51af88-ef7d-46e7-ab4b-d1ccf200a068","Type":"ContainerStarted","Data":"8c0b9c3d57633e69505bf14a2d38b447dfff72838d448b8b95fbe43d9db99c4b"} Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.009572 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532442-hngb4" podStartSLOduration=1.618949481 podStartE2EDuration="3.00954905s" podCreationTimestamp="2026-02-24 15:22:00 +0000 UTC" firstStartedPulling="2026-02-24 15:22:01.009820515 +0000 UTC m=+1982.628879008" lastFinishedPulling="2026-02-24 15:22:02.400420064 +0000 UTC m=+1984.019478577" observedRunningTime="2026-02-24 15:22:02.996975199 +0000 UTC m=+1984.616033682" watchObservedRunningTime="2026-02-24 15:22:03.00954905 +0000 UTC m=+1984.628607543" Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.075287 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b0b6-account-create-update-zvgjp"] Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.088876 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pbbft"] Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.137942 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8fv24"] Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.166191 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-04b1-account-create-update-pknzw"] Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.166248 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b0b6-account-create-update-zvgjp"] Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.173879 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pbbft"] Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.185462 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8fv24"] Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.197478 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-04b1-account-create-update-pknzw"] Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.998654 4982 generic.go:334] "Generic (PLEG): container finished" podID="2d51af88-ef7d-46e7-ab4b-d1ccf200a068" containerID="8c0b9c3d57633e69505bf14a2d38b447dfff72838d448b8b95fbe43d9db99c4b" exitCode=0 Feb 24 15:22:03 crc kubenswrapper[4982]: I0224 15:22:03.998979 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532442-hngb4" event={"ID":"2d51af88-ef7d-46e7-ab4b-d1ccf200a068","Type":"ContainerDied","Data":"8c0b9c3d57633e69505bf14a2d38b447dfff72838d448b8b95fbe43d9db99c4b"} Feb 24 15:22:04 crc kubenswrapper[4982]: I0224 15:22:04.065482 4982 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7xwm"] Feb 24 15:22:04 crc kubenswrapper[4982]: I0224 15:22:04.076944 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-f7xwm"] Feb 24 15:22:04 crc kubenswrapper[4982]: I0224 15:22:04.089067 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"] Feb 24 15:22:04 crc kubenswrapper[4982]: I0224 15:22:04.100370 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-3d1b-account-create-update-zdkh5"] Feb 24 15:22:04 crc kubenswrapper[4982]: I0224 15:22:04.145985 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:22:04 crc kubenswrapper[4982]: E0224 15:22:04.146232 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.163308 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2358d960-bc18-4459-af89-9b2c0ae9081b" path="/var/lib/kubelet/pods/2358d960-bc18-4459-af89-9b2c0ae9081b/volumes" Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.165191 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99273cd9-0c56-4024-9df0-e8a8294bb346" path="/var/lib/kubelet/pods/99273cd9-0c56-4024-9df0-e8a8294bb346/volumes" Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.166325 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d043fa3e-3790-470f-8b7d-743969cede03" path="/var/lib/kubelet/pods/d043fa3e-3790-470f-8b7d-743969cede03/volumes" Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.168144 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66ea93a-f4f2-47c2-a9da-27a82afa75bd" path="/var/lib/kubelet/pods/d66ea93a-f4f2-47c2-a9da-27a82afa75bd/volumes" Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.170969 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0c055f-9963-4ca0-8a2b-dfda0d72f960" path="/var/lib/kubelet/pods/ed0c055f-9963-4ca0-8a2b-dfda0d72f960/volumes" Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.172556 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd310a49-0087-4874-82ba-63f6d002bf18" path="/var/lib/kubelet/pods/fd310a49-0087-4874-82ba-63f6d002bf18/volumes" Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.487559 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532442-hngb4" Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.584202 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb7ln\" (UniqueName: \"kubernetes.io/projected/2d51af88-ef7d-46e7-ab4b-d1ccf200a068-kube-api-access-mb7ln\") pod \"2d51af88-ef7d-46e7-ab4b-d1ccf200a068\" (UID: \"2d51af88-ef7d-46e7-ab4b-d1ccf200a068\") " Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.591594 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d51af88-ef7d-46e7-ab4b-d1ccf200a068-kube-api-access-mb7ln" (OuterVolumeSpecName: "kube-api-access-mb7ln") pod "2d51af88-ef7d-46e7-ab4b-d1ccf200a068" (UID: "2d51af88-ef7d-46e7-ab4b-d1ccf200a068"). InnerVolumeSpecName "kube-api-access-mb7ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:22:05 crc kubenswrapper[4982]: I0224 15:22:05.688589 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb7ln\" (UniqueName: \"kubernetes.io/projected/2d51af88-ef7d-46e7-ab4b-d1ccf200a068-kube-api-access-mb7ln\") on node \"crc\" DevicePath \"\"" Feb 24 15:22:06 crc kubenswrapper[4982]: I0224 15:22:06.035878 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532442-hngb4" event={"ID":"2d51af88-ef7d-46e7-ab4b-d1ccf200a068","Type":"ContainerDied","Data":"5f5514c920a7fe726a5a1439ce4031e6ae01c62283d2ce5c35fff8a783c15347"} Feb 24 15:22:06 crc kubenswrapper[4982]: I0224 15:22:06.035930 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f5514c920a7fe726a5a1439ce4031e6ae01c62283d2ce5c35fff8a783c15347" Feb 24 15:22:06 crc kubenswrapper[4982]: I0224 15:22:06.035957 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532442-hngb4" Feb 24 15:22:06 crc kubenswrapper[4982]: I0224 15:22:06.079749 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532436-hqpch"] Feb 24 15:22:06 crc kubenswrapper[4982]: I0224 15:22:06.096340 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532436-hqpch"] Feb 24 15:22:07 crc kubenswrapper[4982]: I0224 15:22:07.160532 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e218d80b-e585-4a84-8d6f-d68d54b5bc20" path="/var/lib/kubelet/pods/e218d80b-e585-4a84-8d6f-d68d54b5bc20/volumes" Feb 24 15:22:12 crc kubenswrapper[4982]: I0224 15:22:12.109844 4982 generic.go:334] "Generic (PLEG): container finished" podID="ff453eac-e860-4c72-9c3c-0aa80e0554d1" containerID="a5cb81c46f0b9b498364daff21f23403752349dbc1bff882beb9d717c612a12b" exitCode=0 Feb 24 15:22:12 crc kubenswrapper[4982]: I0224 15:22:12.109921 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc" event={"ID":"ff453eac-e860-4c72-9c3c-0aa80e0554d1","Type":"ContainerDied","Data":"a5cb81c46f0b9b498364daff21f23403752349dbc1bff882beb9d717c612a12b"} Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.046101 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f120-account-create-update-7rgzv"] Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.057181 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f120-account-create-update-7rgzv"] Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.068759 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xr68t"] Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.079345 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xr68t"] Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.163006 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5" path="/var/lib/kubelet/pods/94ce89c7-16c7-4d7c-9750-35fcbd5f1ea5/volumes" Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.164338 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec3ae4f-cea7-465c-b582-d075b477d5d0" path="/var/lib/kubelet/pods/dec3ae4f-cea7-465c-b582-d075b477d5d0/volumes" Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.658895 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc" Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.781162 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-inventory\") pod \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.781261 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-bootstrap-combined-ca-bundle\") pod \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.781532 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-ssh-key-openstack-edpm-ipam\") pod \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.781809 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxj98\" (UniqueName: \"kubernetes.io/projected/ff453eac-e860-4c72-9c3c-0aa80e0554d1-kube-api-access-cxj98\") pod \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.786950 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ff453eac-e860-4c72-9c3c-0aa80e0554d1" (UID: "ff453eac-e860-4c72-9c3c-0aa80e0554d1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.789183 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff453eac-e860-4c72-9c3c-0aa80e0554d1-kube-api-access-cxj98" (OuterVolumeSpecName: "kube-api-access-cxj98") pod "ff453eac-e860-4c72-9c3c-0aa80e0554d1" (UID: "ff453eac-e860-4c72-9c3c-0aa80e0554d1"). InnerVolumeSpecName "kube-api-access-cxj98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.886532 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-inventory" (OuterVolumeSpecName: "inventory") pod "ff453eac-e860-4c72-9c3c-0aa80e0554d1" (UID: "ff453eac-e860-4c72-9c3c-0aa80e0554d1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.901025 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-inventory\") pod \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\" (UID: \"ff453eac-e860-4c72-9c3c-0aa80e0554d1\") " Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.902601 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxj98\" (UniqueName: \"kubernetes.io/projected/ff453eac-e860-4c72-9c3c-0aa80e0554d1-kube-api-access-cxj98\") on node \"crc\" DevicePath \"\"" Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.902637 4982 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:22:13 crc kubenswrapper[4982]: W0224 15:22:13.902766 4982 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ff453eac-e860-4c72-9c3c-0aa80e0554d1/volumes/kubernetes.io~secret/inventory Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.902786 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-inventory" (OuterVolumeSpecName: "inventory") pod "ff453eac-e860-4c72-9c3c-0aa80e0554d1" (UID: "ff453eac-e860-4c72-9c3c-0aa80e0554d1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:22:13 crc kubenswrapper[4982]: I0224 15:22:13.907437 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff453eac-e860-4c72-9c3c-0aa80e0554d1" (UID: "ff453eac-e860-4c72-9c3c-0aa80e0554d1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.004792 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.004827 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff453eac-e860-4c72-9c3c-0aa80e0554d1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.138478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc" event={"ID":"ff453eac-e860-4c72-9c3c-0aa80e0554d1","Type":"ContainerDied","Data":"afb3d6a762774a4584116678e47f3a57321d36e6e621f0a9f7038c47337649dc"} Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.138565 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb3d6a762774a4584116678e47f3a57321d36e6e621f0a9f7038c47337649dc" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.138566 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.224384 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2"] Feb 24 15:22:14 crc kubenswrapper[4982]: E0224 15:22:14.224980 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff453eac-e860-4c72-9c3c-0aa80e0554d1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.224998 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff453eac-e860-4c72-9c3c-0aa80e0554d1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 24 15:22:14 crc kubenswrapper[4982]: E0224 15:22:14.225053 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d51af88-ef7d-46e7-ab4b-d1ccf200a068" containerName="oc" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.225064 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d51af88-ef7d-46e7-ab4b-d1ccf200a068" containerName="oc" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.225400 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d51af88-ef7d-46e7-ab4b-d1ccf200a068" containerName="oc" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.225435 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff453eac-e860-4c72-9c3c-0aa80e0554d1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.226485 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.230644 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.230669 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.232977 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.233439 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.263319 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2"] Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.416673 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.416772 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v52v\" (UniqueName: \"kubernetes.io/projected/b7210463-d869-4f1c-8d7b-60985c525f58-kube-api-access-4v52v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.416856 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.520864 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v52v\" (UniqueName: \"kubernetes.io/projected/b7210463-d869-4f1c-8d7b-60985c525f58-kube-api-access-4v52v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.521138 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.521492 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.527904 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.528987 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.541751 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v52v\" (UniqueName: \"kubernetes.io/projected/b7210463-d869-4f1c-8d7b-60985c525f58-kube-api-access-4v52v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:14 crc kubenswrapper[4982]: I0224 15:22:14.548033 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:22:15 crc kubenswrapper[4982]: I0224 15:22:15.268579 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2"] Feb 24 15:22:15 crc kubenswrapper[4982]: W0224 15:22:15.269780 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7210463_d869_4f1c_8d7b_60985c525f58.slice/crio-9df75937834245d9fd4d4e49364dd0b70f4b45d759846c29bffd08db81b052ce WatchSource:0}: Error finding container 9df75937834245d9fd4d4e49364dd0b70f4b45d759846c29bffd08db81b052ce: Status 404 returned error can't find the container with id 9df75937834245d9fd4d4e49364dd0b70f4b45d759846c29bffd08db81b052ce Feb 24 15:22:16 crc kubenswrapper[4982]: I0224 15:22:16.039214 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-46cd-account-create-update-q4m5c"] Feb 24 15:22:16 crc kubenswrapper[4982]: I0224 15:22:16.053077 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr"] Feb 24 15:22:16 crc kubenswrapper[4982]: I0224 15:22:16.066245 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-46cd-account-create-update-q4m5c"] Feb 24 15:22:16 crc kubenswrapper[4982]: I0224 15:22:16.095110 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-4nnfr"] Feb 24 15:22:16 crc kubenswrapper[4982]: I0224 15:22:16.168089 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" event={"ID":"b7210463-d869-4f1c-8d7b-60985c525f58","Type":"ContainerStarted","Data":"b0007a3b98326845bdbf173ddb2bf8f019a020c336f384406823097ab51d5b6f"} Feb 24 15:22:16 crc kubenswrapper[4982]: I0224 15:22:16.168128 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" event={"ID":"b7210463-d869-4f1c-8d7b-60985c525f58","Type":"ContainerStarted","Data":"9df75937834245d9fd4d4e49364dd0b70f4b45d759846c29bffd08db81b052ce"} Feb 24 15:22:17 crc kubenswrapper[4982]: I0224 15:22:17.145731 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:22:17 crc kubenswrapper[4982]: E0224 15:22:17.146306 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:22:17 crc kubenswrapper[4982]: I0224 15:22:17.164160 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25f12dc6-7718-4641-9b56-85236953f8e2" path="/var/lib/kubelet/pods/25f12dc6-7718-4641-9b56-85236953f8e2/volumes" Feb 24 15:22:17 crc kubenswrapper[4982]: I0224 15:22:17.165431 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91ac633-b13c-47df-baf7-eca2759210fa" path="/var/lib/kubelet/pods/a91ac633-b13c-47df-baf7-eca2759210fa/volumes" Feb 24 15:22:20 crc kubenswrapper[4982]: I0224 15:22:20.030313 4982 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" podStartSLOduration=5.640841145 podStartE2EDuration="6.030293327s" podCreationTimestamp="2026-02-24 15:22:14 +0000 UTC" firstStartedPulling="2026-02-24 15:22:15.272903073 +0000 UTC m=+1996.891961566" lastFinishedPulling="2026-02-24 15:22:15.662355255 +0000 UTC m=+1997.281413748" observedRunningTime="2026-02-24 15:22:16.195758934 +0000 UTC m=+1997.814817427" watchObservedRunningTime="2026-02-24 15:22:20.030293327 +0000 UTC m=+2001.649351830" Feb 24 15:22:20 crc kubenswrapper[4982]: I0224 15:22:20.035028 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9r4vk"] Feb 24 15:22:20 crc kubenswrapper[4982]: I0224 15:22:20.048644 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9r4vk"] Feb 24 15:22:21 crc kubenswrapper[4982]: I0224 15:22:21.158369 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668ccd5d-5607-4cb2-832c-54aa8960ff2b" path="/var/lib/kubelet/pods/668ccd5d-5607-4cb2-832c-54aa8960ff2b/volumes" Feb 24 15:22:31 crc kubenswrapper[4982]: I0224 15:22:31.145973 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:22:31 crc kubenswrapper[4982]: E0224 15:22:31.146813 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:22:43 crc kubenswrapper[4982]: I0224 15:22:43.147120 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:22:43 crc kubenswrapper[4982]: E0224 15:22:43.148024 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.127074 4982 scope.go:117] "RemoveContainer" containerID="c6d509fc0b84ed176c02a5ddca1095c01e86815fa2b8db93a147d5cdf85730e5" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.164083 4982 scope.go:117] "RemoveContainer" containerID="465d0f849753128a5488b00177622240b7a727f0452af3e8ae08f6833bbd8e55" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.192895 4982 scope.go:117] "RemoveContainer" containerID="06ec03fc44f588987a1b5fed74a2340c9f6627c1fbe206cd3373c5373247737b" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.274625 4982 scope.go:117] "RemoveContainer" containerID="8b4b4b11ec728b191fe67baf54fb0cb8e34e5e37007ad18bf768df8e3e304e0d" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.325968 4982 scope.go:117] "RemoveContainer" containerID="7ad36b7044bc73c2586c165e2610953a3c3880189b937af0933740f312ef562e" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.356865 4982 scope.go:117] "RemoveContainer" containerID="032fba7567c1016ef236ea5ea23e744188a41d03457e9153b5a72554c6770d7e" Feb 24 15:22:44 crc 
Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.409780 4982 scope.go:117] "RemoveContainer" containerID="812d1cd686345baf4ab46b6a6e0f1acebd9b5941d9c05682c6b07ffed7e3511d" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.486295 4982 scope.go:117] "RemoveContainer" containerID="872a38f2f62e8e179addec5995688c6a9c7e58021a67c8125908076eb34d1d8d" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.530096 4982 scope.go:117] "RemoveContainer" containerID="65d5456fd15006a55ef32dad2687d336952977f5d65d2c10c5a46b44b1dca752" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.594710 4982 scope.go:117] "RemoveContainer" containerID="13af9be5ac39e1a78805e550bdd61153f48819b38ad63336178d6bc9d30683ce" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.640911 4982 scope.go:117] "RemoveContainer" containerID="705f5914fdf329f0ea27eb836aba84f28c53efc4b0abee84f5d306a8ff43abb7" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.677990 4982 scope.go:117] "RemoveContainer" containerID="f72ede8de238ef96f0e0f57f8ef1e6c315e66f30450043c5a0e50d039ff5451b" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.703308 4982 scope.go:117] "RemoveContainer" containerID="99561028cf3107e0e1773a0cbb9bd019fc00c210cec2558faae75c76f45e4914" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.729980 4982 scope.go:117] "RemoveContainer" containerID="21c21d397aa948744692f6623a1e0ca215bd848d0946acd33a68455ae2741bf1" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.759261 4982 scope.go:117] "RemoveContainer" containerID="7fd97204f45901caa85fdf30221e98d8d0d49a7113d2558839e59f70ab081898" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.786815 4982 scope.go:117] "RemoveContainer" containerID="4d89f75122c102bbdbc2352cbb7d69a9d8a3c57945159d7da1012638cdb59262" Feb 24 15:22:44 crc kubenswrapper[4982]: I0224 15:22:44.810637 4982 scope.go:117] "RemoveContainer" containerID="7040ec195045fad266d02bb9c64d6b39c1e00ef0b4ab49c83dae1775a1e36ce3" Feb 24 15:22:48 crc kubenswrapper[4982]: I0224 15:22:48.061267 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-flp5g"] Feb 24 15:22:48 crc kubenswrapper[4982]: I0224 15:22:48.086585 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-c7jsl"] Feb 24 15:22:48 crc kubenswrapper[4982]: I0224 15:22:48.101926 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-affe-account-create-update-22rz7"] Feb 24 15:22:48 crc kubenswrapper[4982]: I0224 15:22:48.121897 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-flp5g"] Feb 24 15:22:48 crc kubenswrapper[4982]: I0224 15:22:48.133425 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-h2snk"] Feb 24 15:22:48 crc kubenswrapper[4982]: I0224 15:22:48.148666 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-affe-account-create-update-22rz7"] Feb 24 15:22:48 crc kubenswrapper[4982]: I0224 15:22:48.159351 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-c7jsl"] Feb 24 15:22:48 crc kubenswrapper[4982]: I0224 15:22:48.170479 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-h2snk"] Feb 24 15:22:49 crc kubenswrapper[4982]: I0224 15:22:49.198784 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21152da4-0b6c-43d5-979f-6178105a507a" path="/var/lib/kubelet/pods/21152da4-0b6c-43d5-979f-6178105a507a/volumes" Feb 24 15:22:49 crc kubenswrapper[4982]: I0224 
15:22:49.202117 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d299bdd-7d22-4c41-8dbf-c09b655e2705" path="/var/lib/kubelet/pods/6d299bdd-7d22-4c41-8dbf-c09b655e2705/volumes" Feb 24 15:22:49 crc kubenswrapper[4982]: I0224 15:22:49.204020 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa862be-6ab3-4fea-85d4-f08b59e0dbc8" path="/var/lib/kubelet/pods/aaa862be-6ab3-4fea-85d4-f08b59e0dbc8/volumes" Feb 24 15:22:49 crc kubenswrapper[4982]: I0224 15:22:49.206509 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2de08c7-91d6-4dd1-af64-cc09f7f43e0c" path="/var/lib/kubelet/pods/b2de08c7-91d6-4dd1-af64-cc09f7f43e0c/volumes" Feb 24 15:22:51 crc kubenswrapper[4982]: I0224 15:22:51.039321 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e5cf-account-create-update-l6nx8"] Feb 24 15:22:51 crc kubenswrapper[4982]: I0224 15:22:51.065244 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-92mf8"] Feb 24 15:22:51 crc kubenswrapper[4982]: I0224 15:22:51.089910 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-cca8-account-create-update-q9fwj"] Feb 24 15:22:51 crc kubenswrapper[4982]: I0224 15:22:51.110376 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e5cf-account-create-update-l6nx8"] Feb 24 15:22:51 crc kubenswrapper[4982]: I0224 15:22:51.128681 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-92mf8"] Feb 24 15:22:51 crc kubenswrapper[4982]: I0224 15:22:51.141484 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-cca8-account-create-update-q9fwj"] Feb 24 15:22:51 crc kubenswrapper[4982]: I0224 15:22:51.163187 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b424dbf-bc16-404a-806e-7be5855b43c8" path="/var/lib/kubelet/pods/0b424dbf-bc16-404a-806e-7be5855b43c8/volumes" Feb 24 15:22:51 crc kubenswrapper[4982]: I0224 15:22:51.164483 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40519aa0-b7b2-4e5c-898c-b73365c9d8f0" path="/var/lib/kubelet/pods/40519aa0-b7b2-4e5c-898c-b73365c9d8f0/volumes" Feb 24 15:22:51 crc kubenswrapper[4982]: I0224 15:22:51.166658 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e0de90-6da0-41e9-9b1d-f0fd256d010c" path="/var/lib/kubelet/pods/83e0de90-6da0-41e9-9b1d-f0fd256d010c/volumes" Feb 24 15:22:52 crc kubenswrapper[4982]: I0224 15:22:52.033163 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-fd44-account-create-update-xsf4h"] Feb 24 15:22:52 crc kubenswrapper[4982]: I0224 15:22:52.044074 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-fd44-account-create-update-xsf4h"] Feb 24 15:22:53 crc kubenswrapper[4982]: I0224 15:22:53.159846 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd4a2df-971c-4ebe-9434-01c1ae4f55d8" path="/var/lib/kubelet/pods/7dd4a2df-971c-4ebe-9434-01c1ae4f55d8/volumes" Feb 24 15:22:57 crc kubenswrapper[4982]: I0224 15:22:57.041839 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-dsc9t"] Feb 24 15:22:57 crc kubenswrapper[4982]: I0224 15:22:57.059996 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-dsc9t"] Feb 24 15:22:57 crc kubenswrapper[4982]: I0224 15:22:57.158331 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b1201af6-076e-430b-adb8-3699f6296afe" path="/var/lib/kubelet/pods/b1201af6-076e-430b-adb8-3699f6296afe/volumes" Feb 24 15:22:58 crc kubenswrapper[4982]: I0224 15:22:58.146056 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:22:58 crc kubenswrapper[4982]: E0224 15:22:58.146561 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:23:12 crc kubenswrapper[4982]: I0224 15:23:12.145607 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:23:12 crc kubenswrapper[4982]: I0224 15:23:12.900905 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"ec6240805dd5fe25be3a90815c44ef652ece8f5ebedbe9f3e922c92054ae3159"} Feb 24 15:23:18 crc kubenswrapper[4982]: I0224 15:23:18.055774 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-82562"] Feb 24 15:23:18 crc kubenswrapper[4982]: I0224 15:23:18.067723 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-82562"] Feb 24 15:23:19 crc kubenswrapper[4982]: I0224 15:23:19.162958 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed4027c-742c-4789-9fad-fc912c419d6d" path="/var/lib/kubelet/pods/1ed4027c-742c-4789-9fad-fc912c419d6d/volumes" Feb 24 15:23:35 crc kubenswrapper[4982]: I0224 15:23:35.033069 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bzq42"] Feb 24 15:23:35 crc kubenswrapper[4982]: I0224 15:23:35.044804 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bzq42"] Feb 24 15:23:35 crc kubenswrapper[4982]: I0224 15:23:35.158807 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4cff74-7cc3-4470-8caf-32c6fbfb437b" path="/var/lib/kubelet/pods/ee4cff74-7cc3-4470-8caf-32c6fbfb437b/volumes" Feb 24 15:23:44 crc kubenswrapper[4982]: I0224 15:23:44.059565 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cq29z"] Feb 24 15:23:44 crc kubenswrapper[4982]: I0224 15:23:44.077122 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s55kw"] Feb 24 15:23:44 crc kubenswrapper[4982]: I0224 15:23:44.092787 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wwkwb"] Feb 24 15:23:44 crc kubenswrapper[4982]: I0224 15:23:44.105412 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cq29z"] Feb 24 15:23:44 crc kubenswrapper[4982]: I0224 15:23:44.116305 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s55kw"] Feb 24 15:23:44 crc kubenswrapper[4982]: I0224 15:23:44.130088 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wwkwb"] Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.160671 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="04b34e34-d603-4df9-a028-0169bf57fae7" path="/var/lib/kubelet/pods/04b34e34-d603-4df9-a028-0169bf57fae7/volumes" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.161775 4982 scope.go:117] "RemoveContainer" containerID="a28fa4c1c14ee8ba67d786e0aa16a7fc209bc0822cbefff9c60a9acc2b2ab21d" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.164361 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b24785f-aeec-4512-8f01-b0e1fc31a2b4" path="/var/lib/kubelet/pods/2b24785f-aeec-4512-8f01-b0e1fc31a2b4/volumes" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.167558 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d53de54-d42e-4095-95b0-df6db57c2106" path="/var/lib/kubelet/pods/8d53de54-d42e-4095-95b0-df6db57c2106/volumes" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.215089 4982 scope.go:117] "RemoveContainer" containerID="acb127858722980d1d59986ecc9ed697b53863bdb064a31dfd224ef83ae4d78d" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.259888 4982 scope.go:117] "RemoveContainer" containerID="77db16a9a7024e3405855d44796cb4b2bd9a923a4ebc9c96c95a7eddeee27918" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.348454 4982 scope.go:117] "RemoveContainer" containerID="ac54d93d444f97afb63a0a7b149d65d9f8f7f122655e37b5fce520dfe571c2cb" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.409328 4982 scope.go:117] "RemoveContainer" containerID="fe188739f98702b7c44f5de1b62011c6c9b35ab29531dbf95df0ac457d45e018" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.471060 4982 scope.go:117] "RemoveContainer" containerID="8abf25f75c60bd84462d1a0fdfedb71834204e717b1e235ba4196b1d3bf6dd9d" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.536097 4982 scope.go:117] "RemoveContainer" containerID="669c10dd2fdf5afa8e3a5942437580742fe649f035edb7476f428311dc5b1681" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.576716 4982 scope.go:117] "RemoveContainer" containerID="20656f755731f28474d24df51773671b95b6e7a395d817f5c2c3bffdd04504e4" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.604269 4982 scope.go:117] "RemoveContainer" containerID="cfab23c02d2b7f2459ada234ecc4460764c981373974b97c5e3a5006e8aefccd" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.644275 4982 scope.go:117] "RemoveContainer" containerID="16931269a19bccd00d82c8a5775b9ad1f391fd9ffd7950cbc51625588817cf81" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.678470 4982 scope.go:117] "RemoveContainer" containerID="360241317dd3ebde420bcbf84ed13b453b007121fa1dcb0fdef2634c061a1947" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.717886 4982 scope.go:117] "RemoveContainer" containerID="4b1ea4d2bcdcf25d33b0d93e969054a7ceecce78c81a96d7388ad07c54526a1d" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.740982 4982 scope.go:117] "RemoveContainer" containerID="b79c94e05afb859fb9752b28190d3f76dea1b12897f2c2aeaec063824cc287b8" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.766618 4982 scope.go:117] "RemoveContainer" containerID="8fa6882bfad1c4c27ba2e8449c7ea780e97711c3921be43f0718b818a97c4f72" Feb 24 15:23:45 crc kubenswrapper[4982]: I0224 15:23:45.806567 4982 scope.go:117] "RemoveContainer" containerID="bc7f358bfe408b8cf71a0900f97e8f26680ed0a0085a778bd607254bf3be6326" Feb 24 15:23:59 crc kubenswrapper[4982]: I0224 15:23:59.480054 4982 generic.go:334] "Generic (PLEG): container finished" podID="b7210463-d869-4f1c-8d7b-60985c525f58" containerID="b0007a3b98326845bdbf173ddb2bf8f019a020c336f384406823097ab51d5b6f" 
exitCode=0 Feb 24 15:23:59 crc kubenswrapper[4982]: I0224 15:23:59.480546 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" event={"ID":"b7210463-d869-4f1c-8d7b-60985c525f58","Type":"ContainerDied","Data":"b0007a3b98326845bdbf173ddb2bf8f019a020c336f384406823097ab51d5b6f"} Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.158708 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532444-szw64"] Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.162206 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532444-szw64" Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.166868 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.168751 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.170283 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.170915 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532444-szw64"] Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.293654 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8pfm\" (UniqueName: \"kubernetes.io/projected/96d9e5d0-53d6-456c-a8df-4ca258da91d4-kube-api-access-b8pfm\") pod \"auto-csr-approver-29532444-szw64\" (UID: \"96d9e5d0-53d6-456c-a8df-4ca258da91d4\") " pod="openshift-infra/auto-csr-approver-29532444-szw64" Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.399261 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8pfm\" (UniqueName: \"kubernetes.io/projected/96d9e5d0-53d6-456c-a8df-4ca258da91d4-kube-api-access-b8pfm\") pod \"auto-csr-approver-29532444-szw64\" (UID: \"96d9e5d0-53d6-456c-a8df-4ca258da91d4\") " pod="openshift-infra/auto-csr-approver-29532444-szw64" Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.430124 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8pfm\" (UniqueName: \"kubernetes.io/projected/96d9e5d0-53d6-456c-a8df-4ca258da91d4-kube-api-access-b8pfm\") pod \"auto-csr-approver-29532444-szw64\" (UID: \"96d9e5d0-53d6-456c-a8df-4ca258da91d4\") " pod="openshift-infra/auto-csr-approver-29532444-szw64" Feb 24 15:24:00 crc kubenswrapper[4982]: I0224 15:24:00.484653 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532444-szw64" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.042652 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.045633 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532444-szw64"]
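Entries tagged `SyncLoop (PLEG)` come from the kubelet's Pod Lifecycle Event Generator, which relays container state changes it observes in the runtime (CRI-O here) back into the sync loop. Pairing `ContainerStarted` and `ContainerDied` by container ID reconstructs each job's run; the download-cache container `b0007a3b...` started at 15:22:16 and died at 15:23:59 above, about 1m43s of runtime. A small sketch of that pairing (the `kubelet.log` filename is an assumption):

```python
# Sketch: rebuild per-container timelines from "SyncLoop (PLEG)" entries
# in a kubelet journal capture. The regex mirrors the format of the
# kubelet.go:2453 entries above.
import re

PLEG = re.compile(
    r'(?P<ts>\w{3} \d+ \d{2}:\d{2}:\d{2}).*"SyncLoop \(PLEG\): event for pod" '
    r'pod="(?P<pod>[^"]+)" event=.*?"Type":"(?P<kind>\w+)","Data":"(?P<cid>[0-9a-f]+)"'
)

timelines = {}  # container ID -> [(timestamp, event kind, pod)]
with open("kubelet.log", encoding="utf-8") as log:
    for line in log:
        m = PLEG.search(line)
        if m:
            timelines.setdefault(m["cid"], []).append((m["ts"], m["kind"], m["pod"]))

for cid, events in timelines.items():
    print(cid[:12], events)  # one ContainerStarted/ContainerDied pair per job run
```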
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.220296 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v52v\" (UniqueName: \"kubernetes.io/projected/b7210463-d869-4f1c-8d7b-60985c525f58-kube-api-access-4v52v\") pod \"b7210463-d869-4f1c-8d7b-60985c525f58\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.220625 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-ssh-key-openstack-edpm-ipam\") pod \"b7210463-d869-4f1c-8d7b-60985c525f58\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.220776 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-inventory\") pod \"b7210463-d869-4f1c-8d7b-60985c525f58\" (UID: \"b7210463-d869-4f1c-8d7b-60985c525f58\") " Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.225741 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7210463-d869-4f1c-8d7b-60985c525f58-kube-api-access-4v52v" (OuterVolumeSpecName: "kube-api-access-4v52v") pod "b7210463-d869-4f1c-8d7b-60985c525f58" (UID: "b7210463-d869-4f1c-8d7b-60985c525f58"). InnerVolumeSpecName "kube-api-access-4v52v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.256743 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-inventory" (OuterVolumeSpecName: "inventory") pod "b7210463-d869-4f1c-8d7b-60985c525f58" (UID: "b7210463-d869-4f1c-8d7b-60985c525f58"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.258130 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b7210463-d869-4f1c-8d7b-60985c525f58" (UID: "b7210463-d869-4f1c-8d7b-60985c525f58"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.327453 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.327532 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v52v\" (UniqueName: \"kubernetes.io/projected/b7210463-d869-4f1c-8d7b-60985c525f58-kube-api-access-4v52v\") on node \"crc\" DevicePath \"\"" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.327547 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7210463-d869-4f1c-8d7b-60985c525f58-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.521274 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532444-szw64" event={"ID":"96d9e5d0-53d6-456c-a8df-4ca258da91d4","Type":"ContainerStarted","Data":"3559569f7761f5a6fe7a20b9fd78d32705d186dfb05708dfbd2f3e1d44e5a0d5"} Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.529158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" event={"ID":"b7210463-d869-4f1c-8d7b-60985c525f58","Type":"ContainerDied","Data":"9df75937834245d9fd4d4e49364dd0b70f4b45d759846c29bffd08db81b052ce"} Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.529200 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df75937834245d9fd4d4e49364dd0b70f4b45d759846c29bffd08db81b052ce" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.529309 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.604627 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv"] Feb 24 15:24:01 crc kubenswrapper[4982]: E0224 15:24:01.605232 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7210463-d869-4f1c-8d7b-60985c525f58" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.605252 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7210463-d869-4f1c-8d7b-60985c525f58" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.605673 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7210463-d869-4f1c-8d7b-60985c525f58" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.606835 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.612465 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.613382 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.613472 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.613393 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.618792 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv"] Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.742745 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.742861 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjcg6\" (UniqueName: \"kubernetes.io/projected/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-kube-api-access-pjcg6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.742921 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.845035 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.845135 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjcg6\" (UniqueName: \"kubernetes.io/projected/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-kube-api-access-pjcg6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.845204 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.852726 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.853768 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.864770 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjcg6\" (UniqueName: \"kubernetes.io/projected/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-kube-api-access-pjcg6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:01 crc kubenswrapper[4982]: I0224 15:24:01.934223 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:24:02 crc kubenswrapper[4982]: I0224 15:24:02.551458 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532444-szw64" event={"ID":"96d9e5d0-53d6-456c-a8df-4ca258da91d4","Type":"ContainerStarted","Data":"7ceb18a9bf81e8ba0c93dc02318c0f9940411705363a232a5042fc3392ba4e39"} Feb 24 15:24:02 crc kubenswrapper[4982]: I0224 15:24:02.557904 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv"] Feb 24 15:24:02 crc kubenswrapper[4982]: I0224 15:24:02.581787 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532444-szw64" podStartSLOduration=1.566406686 podStartE2EDuration="2.581765898s" podCreationTimestamp="2026-02-24 15:24:00 +0000 UTC" firstStartedPulling="2026-02-24 15:24:01.04235154 +0000 UTC m=+2102.661410033" lastFinishedPulling="2026-02-24 15:24:02.057710742 +0000 UTC m=+2103.676769245" observedRunningTime="2026-02-24 15:24:02.568740025 +0000 UTC m=+2104.187798518" watchObservedRunningTime="2026-02-24 15:24:02.581765898 +0000 UTC m=+2104.200824391" Feb 24 15:24:03 crc kubenswrapper[4982]: I0224 15:24:03.563712 4982 generic.go:334] "Generic (PLEG): container finished" podID="96d9e5d0-53d6-456c-a8df-4ca258da91d4" containerID="7ceb18a9bf81e8ba0c93dc02318c0f9940411705363a232a5042fc3392ba4e39" exitCode=0 Feb 24 15:24:03 crc kubenswrapper[4982]: I0224 15:24:03.563819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532444-szw64" 
event={"ID":"96d9e5d0-53d6-456c-a8df-4ca258da91d4","Type":"ContainerDied","Data":"7ceb18a9bf81e8ba0c93dc02318c0f9940411705363a232a5042fc3392ba4e39"} Feb 24 15:24:03 crc kubenswrapper[4982]: I0224 15:24:03.566267 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" event={"ID":"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5","Type":"ContainerStarted","Data":"63f7758e66e67501fcf923fcecacb569c11160041e787f2181c1950533c2c021"} Feb 24 15:24:03 crc kubenswrapper[4982]: I0224 15:24:03.566300 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" event={"ID":"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5","Type":"ContainerStarted","Data":"a5372c97206c8e50b79cc6fecb26072b2b923e7cbf5f5d0ddd6da3b951f4aed5"} Feb 24 15:24:03 crc kubenswrapper[4982]: I0224 15:24:03.639548 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" podStartSLOduration=2.179674089 podStartE2EDuration="2.639518812s" podCreationTimestamp="2026-02-24 15:24:01 +0000 UTC" firstStartedPulling="2026-02-24 15:24:02.581677206 +0000 UTC m=+2104.200735699" lastFinishedPulling="2026-02-24 15:24:03.041521919 +0000 UTC m=+2104.660580422" observedRunningTime="2026-02-24 15:24:03.61992154 +0000 UTC m=+2105.238980043" watchObservedRunningTime="2026-02-24 15:24:03.639518812 +0000 UTC m=+2105.258577315" Feb 24 15:24:05 crc kubenswrapper[4982]: I0224 15:24:05.094652 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532444-szw64" Feb 24 15:24:05 crc kubenswrapper[4982]: I0224 15:24:05.251633 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8pfm\" (UniqueName: \"kubernetes.io/projected/96d9e5d0-53d6-456c-a8df-4ca258da91d4-kube-api-access-b8pfm\") pod \"96d9e5d0-53d6-456c-a8df-4ca258da91d4\" (UID: \"96d9e5d0-53d6-456c-a8df-4ca258da91d4\") " Feb 24 15:24:05 crc kubenswrapper[4982]: I0224 15:24:05.258259 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d9e5d0-53d6-456c-a8df-4ca258da91d4-kube-api-access-b8pfm" (OuterVolumeSpecName: "kube-api-access-b8pfm") pod "96d9e5d0-53d6-456c-a8df-4ca258da91d4" (UID: "96d9e5d0-53d6-456c-a8df-4ca258da91d4"). InnerVolumeSpecName "kube-api-access-b8pfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:24:05 crc kubenswrapper[4982]: I0224 15:24:05.355999 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8pfm\" (UniqueName: \"kubernetes.io/projected/96d9e5d0-53d6-456c-a8df-4ca258da91d4-kube-api-access-b8pfm\") on node \"crc\" DevicePath \"\"" Feb 24 15:24:05 crc kubenswrapper[4982]: I0224 15:24:05.601227 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532444-szw64" event={"ID":"96d9e5d0-53d6-456c-a8df-4ca258da91d4","Type":"ContainerDied","Data":"3559569f7761f5a6fe7a20b9fd78d32705d186dfb05708dfbd2f3e1d44e5a0d5"} Feb 24 15:24:05 crc kubenswrapper[4982]: I0224 15:24:05.601268 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3559569f7761f5a6fe7a20b9fd78d32705d186dfb05708dfbd2f3e1d44e5a0d5" Feb 24 15:24:05 crc kubenswrapper[4982]: I0224 15:24:05.601314 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532444-szw64" Feb 24 15:24:05 crc kubenswrapper[4982]: I0224 15:24:05.661619 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532438-bk652"] Feb 24 15:24:05 crc kubenswrapper[4982]: I0224 15:24:05.675251 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532438-bk652"] Feb 24 15:24:07 crc kubenswrapper[4982]: I0224 15:24:07.167438 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1f1796-cccb-4a17-b169-e3240d7e884d" path="/var/lib/kubelet/pods/7d1f1796-cccb-4a17-b169-e3240d7e884d/volumes" Feb 24 15:24:16 crc kubenswrapper[4982]: I0224 15:24:16.047667 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7dmfp"] Feb 24 15:24:16 crc kubenswrapper[4982]: I0224 15:24:16.067747 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7dmfp"] Feb 24 15:24:17 crc kubenswrapper[4982]: I0224 15:24:17.161528 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d6a585f-95fd-47f7-a809-cceee8a3644a" path="/var/lib/kubelet/pods/9d6a585f-95fd-47f7-a809-cceee8a3644a/volumes" Feb 24 15:24:46 crc kubenswrapper[4982]: I0224 15:24:46.123439 4982 scope.go:117] "RemoveContainer" containerID="bb0725cb36d0055b514d3b955515b8d9f0084a49e013ea563ebafe325cc20b2c" Feb 24 15:24:46 crc kubenswrapper[4982]: I0224 15:24:46.191320 4982 scope.go:117] "RemoveContainer" containerID="c17b59326500174f6a2d0f9566794193490e9a65eb23870cbc37de0748469c6f" Feb 24 15:25:05 crc kubenswrapper[4982]: I0224 15:25:05.339917 4982 generic.go:334] "Generic (PLEG): container finished" podID="56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5" containerID="63f7758e66e67501fcf923fcecacb569c11160041e787f2181c1950533c2c021" exitCode=0 Feb 24 15:25:05 crc kubenswrapper[4982]: I0224 15:25:05.340124 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" event={"ID":"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5","Type":"ContainerDied","Data":"63f7758e66e67501fcf923fcecacb569c11160041e787f2181c1950533c2c021"} Feb 24 15:25:06 crc kubenswrapper[4982]: I0224 15:25:06.835297 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.019370 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjcg6\" (UniqueName: \"kubernetes.io/projected/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-kube-api-access-pjcg6\") pod \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.019550 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-ssh-key-openstack-edpm-ipam\") pod \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.019598 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-inventory\") pod \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\" (UID: \"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5\") " Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.025372 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-kube-api-access-pjcg6" (OuterVolumeSpecName: "kube-api-access-pjcg6") pod "56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5" (UID: "56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5"). InnerVolumeSpecName "kube-api-access-pjcg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.058307 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-inventory" (OuterVolumeSpecName: "inventory") pod "56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5" (UID: "56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.069644 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5" (UID: "56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.123253 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjcg6\" (UniqueName: \"kubernetes.io/projected/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-kube-api-access-pjcg6\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.123292 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.123303 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.366026 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" event={"ID":"56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5","Type":"ContainerDied","Data":"a5372c97206c8e50b79cc6fecb26072b2b923e7cbf5f5d0ddd6da3b951f4aed5"} Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.366073 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5372c97206c8e50b79cc6fecb26072b2b923e7cbf5f5d0ddd6da3b951f4aed5" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.366090 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.467102 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk"] Feb 24 15:25:07 crc kubenswrapper[4982]: E0224 15:25:07.467663 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.467689 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 24 15:25:07 crc kubenswrapper[4982]: E0224 15:25:07.467709 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d9e5d0-53d6-456c-a8df-4ca258da91d4" containerName="oc" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.467717 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d9e5d0-53d6-456c-a8df-4ca258da91d4" containerName="oc" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.468291 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d9e5d0-53d6-456c-a8df-4ca258da91d4" containerName="oc" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.468342 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.469364 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.473203 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.473462 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.473559 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.475152 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.490719 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk"] Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.636557 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.637244 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jhdj\" (UniqueName: \"kubernetes.io/projected/514b9bdd-6644-4170-bd57-2c6b1073b9cb-kube-api-access-2jhdj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.637389 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.739590 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.739824 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jhdj\" (UniqueName: \"kubernetes.io/projected/514b9bdd-6644-4170-bd57-2c6b1073b9cb-kube-api-access-2jhdj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.739881 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.748146 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.750249 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.756309 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jhdj\" (UniqueName: \"kubernetes.io/projected/514b9bdd-6644-4170-bd57-2c6b1073b9cb-kube-api-access-2jhdj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:07 crc kubenswrapper[4982]: I0224 15:25:07.797604 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:08 crc kubenswrapper[4982]: I0224 15:25:08.429920 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk"] Feb 24 15:25:09 crc kubenswrapper[4982]: I0224 15:25:09.390850 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" event={"ID":"514b9bdd-6644-4170-bd57-2c6b1073b9cb","Type":"ContainerStarted","Data":"c0e09ff522e1b987eb276c293626af8b1045ab485f23acbded565b3661549fb2"} Feb 24 15:25:09 crc kubenswrapper[4982]: I0224 15:25:09.391120 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" event={"ID":"514b9bdd-6644-4170-bd57-2c6b1073b9cb","Type":"ContainerStarted","Data":"8d6511e95e8570890c3c57172ee972c3f7e07e043d3f684d845fd478357d036c"} Feb 24 15:25:09 crc kubenswrapper[4982]: I0224 15:25:09.422030 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" podStartSLOduration=1.767452501 podStartE2EDuration="2.42200282s" podCreationTimestamp="2026-02-24 15:25:07 +0000 UTC" firstStartedPulling="2026-02-24 15:25:08.431137821 +0000 UTC m=+2170.050196314" lastFinishedPulling="2026-02-24 15:25:09.08568815 +0000 UTC m=+2170.704746633" observedRunningTime="2026-02-24 15:25:09.412180093 +0000 UTC m=+2171.031238636" watchObservedRunningTime="2026-02-24 15:25:09.42200282 +0000 UTC m=+2171.041061323" Feb 24 15:25:13 crc kubenswrapper[4982]: I0224 15:25:13.064474 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-756sh"] Feb 24 
15:25:13 crc kubenswrapper[4982]: I0224 15:25:13.079680 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-756sh"] Feb 24 15:25:13 crc kubenswrapper[4982]: I0224 15:25:13.161319 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30138906-e9e1-46d1-87e1-fd842efdf3ed" path="/var/lib/kubelet/pods/30138906-e9e1-46d1-87e1-fd842efdf3ed/volumes" Feb 24 15:25:14 crc kubenswrapper[4982]: I0224 15:25:14.044412 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-63de-account-create-update-9mg92"] Feb 24 15:25:14 crc kubenswrapper[4982]: I0224 15:25:14.059698 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jchcv"] Feb 24 15:25:14 crc kubenswrapper[4982]: I0224 15:25:14.069568 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-63de-account-create-update-9mg92"] Feb 24 15:25:14 crc kubenswrapper[4982]: I0224 15:25:14.082286 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jchcv"] Feb 24 15:25:14 crc kubenswrapper[4982]: I0224 15:25:14.448163 4982 generic.go:334] "Generic (PLEG): container finished" podID="514b9bdd-6644-4170-bd57-2c6b1073b9cb" containerID="c0e09ff522e1b987eb276c293626af8b1045ab485f23acbded565b3661549fb2" exitCode=0 Feb 24 15:25:14 crc kubenswrapper[4982]: I0224 15:25:14.448224 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" event={"ID":"514b9bdd-6644-4170-bd57-2c6b1073b9cb","Type":"ContainerDied","Data":"c0e09ff522e1b987eb276c293626af8b1045ab485f23acbded565b3661549fb2"} Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.043410 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dzs7v"] Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.055344 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1073-account-create-update-75xhn"] Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.071237 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1073-account-create-update-75xhn"] Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.081266 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dzs7v"] Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.092935 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-74e2-account-create-update-krmpd"] Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.106638 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-74e2-account-create-update-krmpd"] Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.158829 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0517f81d-f724-4803-9e98-85d228c39b2f" path="/var/lib/kubelet/pods/0517f81d-f724-4803-9e98-85d228c39b2f/volumes" Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.159433 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f7297d-6126-40bc-b5b4-54d01f6cb255" path="/var/lib/kubelet/pods/62f7297d-6126-40bc-b5b4-54d01f6cb255/volumes" Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.160174 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e38160-7267-4499-85f3-e05aeff196dc" path="/var/lib/kubelet/pods/65e38160-7267-4499-85f3-e05aeff196dc/volumes" Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.160798 4982 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bdcc0e1-05fe-45a5-929e-eba045439850" path="/var/lib/kubelet/pods/7bdcc0e1-05fe-45a5-929e-eba045439850/volumes" Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.162628 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df39a573-272f-4a7b-a6f8-751f8b2c0dd9" path="/var/lib/kubelet/pods/df39a573-272f-4a7b-a6f8-751f8b2c0dd9/volumes" Feb 24 15:25:15 crc kubenswrapper[4982]: I0224 15:25:15.997809 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.077751 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-inventory\") pod \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.077889 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-ssh-key-openstack-edpm-ipam\") pod \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.078211 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jhdj\" (UniqueName: \"kubernetes.io/projected/514b9bdd-6644-4170-bd57-2c6b1073b9cb-kube-api-access-2jhdj\") pod \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\" (UID: \"514b9bdd-6644-4170-bd57-2c6b1073b9cb\") " Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.100196 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514b9bdd-6644-4170-bd57-2c6b1073b9cb-kube-api-access-2jhdj" (OuterVolumeSpecName: "kube-api-access-2jhdj") pod "514b9bdd-6644-4170-bd57-2c6b1073b9cb" (UID: "514b9bdd-6644-4170-bd57-2c6b1073b9cb"). InnerVolumeSpecName "kube-api-access-2jhdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.126455 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "514b9bdd-6644-4170-bd57-2c6b1073b9cb" (UID: "514b9bdd-6644-4170-bd57-2c6b1073b9cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.157408 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-inventory" (OuterVolumeSpecName: "inventory") pod "514b9bdd-6644-4170-bd57-2c6b1073b9cb" (UID: "514b9bdd-6644-4170-bd57-2c6b1073b9cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.185172 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jhdj\" (UniqueName: \"kubernetes.io/projected/514b9bdd-6644-4170-bd57-2c6b1073b9cb-kube-api-access-2jhdj\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.185207 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.185218 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/514b9bdd-6644-4170-bd57-2c6b1073b9cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.491807 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" event={"ID":"514b9bdd-6644-4170-bd57-2c6b1073b9cb","Type":"ContainerDied","Data":"8d6511e95e8570890c3c57172ee972c3f7e07e043d3f684d845fd478357d036c"} Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.491877 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6511e95e8570890c3c57172ee972c3f7e07e043d3f684d845fd478357d036c" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.491984 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.625736 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl"] Feb 24 15:25:16 crc kubenswrapper[4982]: E0224 15:25:16.626276 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514b9bdd-6644-4170-bd57-2c6b1073b9cb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.626300 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="514b9bdd-6644-4170-bd57-2c6b1073b9cb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.626586 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="514b9bdd-6644-4170-bd57-2c6b1073b9cb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.627570 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.630605 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.630615 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.631315 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.631453 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.654666 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl"] Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.699189 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2zfwl\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.699319 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdl9\" (UniqueName: \"kubernetes.io/projected/1b676930-50c4-4caa-aeba-85814ae02e3a-kube-api-access-sqdl9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2zfwl\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.699383 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2zfwl\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.801223 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2zfwl\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.801404 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqdl9\" (UniqueName: \"kubernetes.io/projected/1b676930-50c4-4caa-aeba-85814ae02e3a-kube-api-access-sqdl9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2zfwl\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.801480 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-2zfwl\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.805575 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2zfwl\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.805675 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2zfwl\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.817315 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqdl9\" (UniqueName: \"kubernetes.io/projected/1b676930-50c4-4caa-aeba-85814ae02e3a-kube-api-access-sqdl9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2zfwl\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:16 crc kubenswrapper[4982]: I0224 15:25:16.947771 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:17 crc kubenswrapper[4982]: I0224 15:25:17.542064 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl"] Feb 24 15:25:18 crc kubenswrapper[4982]: I0224 15:25:18.527368 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" event={"ID":"1b676930-50c4-4caa-aeba-85814ae02e3a","Type":"ContainerStarted","Data":"f6086716891e34eb8687ab7434d4fcacaffa8c628c5921a0bab9afe81a35055d"} Feb 24 15:25:18 crc kubenswrapper[4982]: I0224 15:25:18.527811 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" event={"ID":"1b676930-50c4-4caa-aeba-85814ae02e3a","Type":"ContainerStarted","Data":"4abf7e0ed36732762bd3de2c6e64c0e284f09ecd5f9431762302e522c949b7a9"} Feb 24 15:25:18 crc kubenswrapper[4982]: I0224 15:25:18.584243 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" podStartSLOduration=2.106020558 podStartE2EDuration="2.584226599s" podCreationTimestamp="2026-02-24 15:25:16 +0000 UTC" firstStartedPulling="2026-02-24 15:25:17.543008504 +0000 UTC m=+2179.162066997" lastFinishedPulling="2026-02-24 15:25:18.021214545 +0000 UTC m=+2179.640273038" observedRunningTime="2026-02-24 15:25:18.572378637 +0000 UTC m=+2180.191437180" watchObservedRunningTime="2026-02-24 15:25:18.584226599 +0000 UTC m=+2180.203285092" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.014053 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rx4ws"] Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.018092 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.031389 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rx4ws"] Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.161295 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-catalog-content\") pod \"community-operators-rx4ws\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.161449 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-utilities\") pod \"community-operators-rx4ws\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.161521 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxgvg\" (UniqueName: \"kubernetes.io/projected/927cc471-c78f-41f4-ae94-7349b5f45215-kube-api-access-jxgvg\") pod \"community-operators-rx4ws\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.264092 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-utilities\") pod \"community-operators-rx4ws\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.264466 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxgvg\" (UniqueName: \"kubernetes.io/projected/927cc471-c78f-41f4-ae94-7349b5f45215-kube-api-access-jxgvg\") pod \"community-operators-rx4ws\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.264679 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-catalog-content\") pod \"community-operators-rx4ws\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.264672 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-utilities\") pod \"community-operators-rx4ws\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.265071 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-catalog-content\") pod \"community-operators-rx4ws\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.292575 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jxgvg\" (UniqueName: \"kubernetes.io/projected/927cc471-c78f-41f4-ae94-7349b5f45215-kube-api-access-jxgvg\") pod \"community-operators-rx4ws\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.347513 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:34 crc kubenswrapper[4982]: I0224 15:25:34.922452 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rx4ws"] Feb 24 15:25:35 crc kubenswrapper[4982]: I0224 15:25:35.726661 4982 generic.go:334] "Generic (PLEG): container finished" podID="927cc471-c78f-41f4-ae94-7349b5f45215" containerID="4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4" exitCode=0 Feb 24 15:25:35 crc kubenswrapper[4982]: I0224 15:25:35.726894 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx4ws" event={"ID":"927cc471-c78f-41f4-ae94-7349b5f45215","Type":"ContainerDied","Data":"4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4"} Feb 24 15:25:35 crc kubenswrapper[4982]: I0224 15:25:35.728245 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx4ws" event={"ID":"927cc471-c78f-41f4-ae94-7349b5f45215","Type":"ContainerStarted","Data":"5ac0a16684703fe71d9832b601185cddf4a5b90a1de92b342b8277a889400e3e"} Feb 24 15:25:37 crc kubenswrapper[4982]: I0224 15:25:37.761684 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx4ws" event={"ID":"927cc471-c78f-41f4-ae94-7349b5f45215","Type":"ContainerStarted","Data":"8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a"} Feb 24 15:25:38 crc kubenswrapper[4982]: I0224 15:25:38.737817 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:25:38 crc kubenswrapper[4982]: I0224 15:25:38.737898 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:25:39 crc kubenswrapper[4982]: I0224 15:25:39.787198 4982 generic.go:334] "Generic (PLEG): container finished" podID="927cc471-c78f-41f4-ae94-7349b5f45215" containerID="8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a" exitCode=0 Feb 24 15:25:39 crc kubenswrapper[4982]: I0224 15:25:39.787307 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx4ws" event={"ID":"927cc471-c78f-41f4-ae94-7349b5f45215","Type":"ContainerDied","Data":"8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a"} Feb 24 15:25:40 crc kubenswrapper[4982]: I0224 15:25:40.799385 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx4ws" event={"ID":"927cc471-c78f-41f4-ae94-7349b5f45215","Type":"ContainerStarted","Data":"f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00"} Feb 24 
15:25:40 crc kubenswrapper[4982]: I0224 15:25:40.825145 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rx4ws" podStartSLOduration=3.336097541 podStartE2EDuration="7.825122381s" podCreationTimestamp="2026-02-24 15:25:33 +0000 UTC" firstStartedPulling="2026-02-24 15:25:35.730720577 +0000 UTC m=+2197.349779070" lastFinishedPulling="2026-02-24 15:25:40.219745417 +0000 UTC m=+2201.838803910" observedRunningTime="2026-02-24 15:25:40.815800278 +0000 UTC m=+2202.434858781" watchObservedRunningTime="2026-02-24 15:25:40.825122381 +0000 UTC m=+2202.444180894" Feb 24 15:25:44 crc kubenswrapper[4982]: I0224 15:25:44.347860 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:44 crc kubenswrapper[4982]: I0224 15:25:44.348471 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:44 crc kubenswrapper[4982]: I0224 15:25:44.429077 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:46 crc kubenswrapper[4982]: I0224 15:25:46.335654 4982 scope.go:117] "RemoveContainer" containerID="d2e0c4920d696de3b11aae5d7cef578ebe839dcddba26e0a3883f0b9cb36f857" Feb 24 15:25:46 crc kubenswrapper[4982]: I0224 15:25:46.364319 4982 scope.go:117] "RemoveContainer" containerID="6c1b068874aae73d0ee7716d047087a5a7b64dcf8d79f6b590725466c7db9bfa" Feb 24 15:25:46 crc kubenswrapper[4982]: I0224 15:25:46.420915 4982 scope.go:117] "RemoveContainer" containerID="415fbec6a8c2fe97df97eec423d061b4573498a064e91032b4bd3fb146ef00a7" Feb 24 15:25:46 crc kubenswrapper[4982]: I0224 15:25:46.477286 4982 scope.go:117] "RemoveContainer" containerID="dc08518e95385eecca9fc0ea8d6fee5b91646461bbce5916d3534f64b9c2bc8f" Feb 24 15:25:46 crc kubenswrapper[4982]: I0224 15:25:46.539032 4982 scope.go:117] "RemoveContainer" containerID="688ab5fa56b17793d2d349b32f81ba4e6a6f6c9db71753db6c6313e2122a01d3" Feb 24 15:25:46 crc kubenswrapper[4982]: I0224 15:25:46.578740 4982 scope.go:117] "RemoveContainer" containerID="042b6dd31c04595c351e9f6fa6b9f93d255513ce63973ef5f651731d0336130b" Feb 24 15:25:52 crc kubenswrapper[4982]: I0224 15:25:52.950574 4982 generic.go:334] "Generic (PLEG): container finished" podID="1b676930-50c4-4caa-aeba-85814ae02e3a" containerID="f6086716891e34eb8687ab7434d4fcacaffa8c628c5921a0bab9afe81a35055d" exitCode=0 Feb 24 15:25:52 crc kubenswrapper[4982]: I0224 15:25:52.950952 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" event={"ID":"1b676930-50c4-4caa-aeba-85814ae02e3a","Type":"ContainerDied","Data":"f6086716891e34eb8687ab7434d4fcacaffa8c628c5921a0bab9afe81a35055d"} Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.056335 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gp5wp"] Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.079541 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gp5wp"] Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.404765 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.469595 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-rx4ws"] Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.485341 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.624514 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-inventory\") pod \"1b676930-50c4-4caa-aeba-85814ae02e3a\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.624987 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqdl9\" (UniqueName: \"kubernetes.io/projected/1b676930-50c4-4caa-aeba-85814ae02e3a-kube-api-access-sqdl9\") pod \"1b676930-50c4-4caa-aeba-85814ae02e3a\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.625629 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-ssh-key-openstack-edpm-ipam\") pod \"1b676930-50c4-4caa-aeba-85814ae02e3a\" (UID: \"1b676930-50c4-4caa-aeba-85814ae02e3a\") " Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.634322 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b676930-50c4-4caa-aeba-85814ae02e3a-kube-api-access-sqdl9" (OuterVolumeSpecName: "kube-api-access-sqdl9") pod "1b676930-50c4-4caa-aeba-85814ae02e3a" (UID: "1b676930-50c4-4caa-aeba-85814ae02e3a"). InnerVolumeSpecName "kube-api-access-sqdl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.668907 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1b676930-50c4-4caa-aeba-85814ae02e3a" (UID: "1b676930-50c4-4caa-aeba-85814ae02e3a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.669100 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-inventory" (OuterVolumeSpecName: "inventory") pod "1b676930-50c4-4caa-aeba-85814ae02e3a" (UID: "1b676930-50c4-4caa-aeba-85814ae02e3a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.728708 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqdl9\" (UniqueName: \"kubernetes.io/projected/1b676930-50c4-4caa-aeba-85814ae02e3a-kube-api-access-sqdl9\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.728791 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.728807 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b676930-50c4-4caa-aeba-85814ae02e3a-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.973254 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.973238 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2zfwl" event={"ID":"1b676930-50c4-4caa-aeba-85814ae02e3a","Type":"ContainerDied","Data":"4abf7e0ed36732762bd3de2c6e64c0e284f09ecd5f9431762302e522c949b7a9"} Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.973314 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4abf7e0ed36732762bd3de2c6e64c0e284f09ecd5f9431762302e522c949b7a9" Feb 24 15:25:54 crc kubenswrapper[4982]: I0224 15:25:54.973391 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rx4ws" podUID="927cc471-c78f-41f4-ae94-7349b5f45215" containerName="registry-server" containerID="cri-o://f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00" gracePeriod=2 Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.089965 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g"] Feb 24 15:25:55 crc kubenswrapper[4982]: E0224 15:25:55.090712 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b676930-50c4-4caa-aeba-85814ae02e3a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.090731 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b676930-50c4-4caa-aeba-85814ae02e3a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.105051 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b676930-50c4-4caa-aeba-85814ae02e3a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.106137 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g"] Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.106290 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.110348 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.110388 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.110557 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.114274 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.158630 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a27d742e-50bb-470c-a3af-8bd17beaca37" path="/var/lib/kubelet/pods/a27d742e-50bb-470c-a3af-8bd17beaca37/volumes" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.241015 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.241479 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd2nh\" (UniqueName: \"kubernetes.io/projected/17fad281-67b9-48ec-b6a3-82ecd730d659-kube-api-access-jd2nh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.241570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.343781 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd2nh\" (UniqueName: \"kubernetes.io/projected/17fad281-67b9-48ec-b6a3-82ecd730d659-kube-api-access-jd2nh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.343905 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.344162 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.357524 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.358173 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.361927 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd2nh\" (UniqueName: \"kubernetes.io/projected/17fad281-67b9-48ec-b6a3-82ecd730d659-kube-api-access-jd2nh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.456317 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.578453 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.754342 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxgvg\" (UniqueName: \"kubernetes.io/projected/927cc471-c78f-41f4-ae94-7349b5f45215-kube-api-access-jxgvg\") pod \"927cc471-c78f-41f4-ae94-7349b5f45215\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.754721 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-catalog-content\") pod \"927cc471-c78f-41f4-ae94-7349b5f45215\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.754787 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-utilities\") pod \"927cc471-c78f-41f4-ae94-7349b5f45215\" (UID: \"927cc471-c78f-41f4-ae94-7349b5f45215\") " Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.755627 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-utilities" (OuterVolumeSpecName: "utilities") pod "927cc471-c78f-41f4-ae94-7349b5f45215" (UID: "927cc471-c78f-41f4-ae94-7349b5f45215"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.764001 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927cc471-c78f-41f4-ae94-7349b5f45215-kube-api-access-jxgvg" (OuterVolumeSpecName: "kube-api-access-jxgvg") pod "927cc471-c78f-41f4-ae94-7349b5f45215" (UID: "927cc471-c78f-41f4-ae94-7349b5f45215"). InnerVolumeSpecName "kube-api-access-jxgvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.819817 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "927cc471-c78f-41f4-ae94-7349b5f45215" (UID: "927cc471-c78f-41f4-ae94-7349b5f45215"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.857897 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.858101 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927cc471-c78f-41f4-ae94-7349b5f45215-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.858113 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxgvg\" (UniqueName: \"kubernetes.io/projected/927cc471-c78f-41f4-ae94-7349b5f45215-kube-api-access-jxgvg\") on node \"crc\" DevicePath \"\"" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.984382 4982 generic.go:334] "Generic (PLEG): container finished" podID="927cc471-c78f-41f4-ae94-7349b5f45215" containerID="f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00" exitCode=0 Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.984422 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx4ws" event={"ID":"927cc471-c78f-41f4-ae94-7349b5f45215","Type":"ContainerDied","Data":"f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00"} Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.984446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx4ws" event={"ID":"927cc471-c78f-41f4-ae94-7349b5f45215","Type":"ContainerDied","Data":"5ac0a16684703fe71d9832b601185cddf4a5b90a1de92b342b8277a889400e3e"} Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.984464 4982 scope.go:117] "RemoveContainer" containerID="f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00" Feb 24 15:25:55 crc kubenswrapper[4982]: I0224 15:25:55.984608 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rx4ws" Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.024453 4982 scope.go:117] "RemoveContainer" containerID="8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a" Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.025102 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g"] Feb 24 15:25:56 crc kubenswrapper[4982]: W0224 15:25:56.033409 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17fad281_67b9_48ec_b6a3_82ecd730d659.slice/crio-a1041277e4eeefdacd44df2f2c343721b26f124fab326ee7ba3b7ce15eb3257a WatchSource:0}: Error finding container a1041277e4eeefdacd44df2f2c343721b26f124fab326ee7ba3b7ce15eb3257a: Status 404 returned error can't find the container with id a1041277e4eeefdacd44df2f2c343721b26f124fab326ee7ba3b7ce15eb3257a Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.043307 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rx4ws"] Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.053904 4982 scope.go:117] "RemoveContainer" containerID="4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4" Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.055545 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rx4ws"] Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.080794 4982 scope.go:117] "RemoveContainer" containerID="f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00" Feb 24 15:25:56 crc kubenswrapper[4982]: E0224 15:25:56.082197 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00\": container with ID starting with f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00 not found: ID does not exist" containerID="f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00" Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.082252 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00"} err="failed to get container status \"f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00\": rpc error: code = NotFound desc = could not find container \"f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00\": container with ID starting with f41b0258bebd5722050f63f35058c2d36c1f5d853d3d281d1e87a2354c55cd00 not found: ID does not exist" Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.082285 4982 scope.go:117] "RemoveContainer" containerID="8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a" Feb 24 15:25:56 crc kubenswrapper[4982]: E0224 15:25:56.083806 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a\": container with ID starting with 8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a not found: ID does not exist" containerID="8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a" Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.083861 4982 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a"} err="failed to get container status \"8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a\": rpc error: code = NotFound desc = could not find container \"8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a\": container with ID starting with 8f910225ccc38a9f67cdc42caed2fb4dbba26f686752c5b66cb413fce75b704a not found: ID does not exist" Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.083921 4982 scope.go:117] "RemoveContainer" containerID="4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4" Feb 24 15:25:56 crc kubenswrapper[4982]: E0224 15:25:56.084255 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4\": container with ID starting with 4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4 not found: ID does not exist" containerID="4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4" Feb 24 15:25:56 crc kubenswrapper[4982]: I0224 15:25:56.084292 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4"} err="failed to get container status \"4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4\": rpc error: code = NotFound desc = could not find container \"4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4\": container with ID starting with 4b388783f41ed8c95674aab9f2424821c0f9c742a23ae85c4633cf6f4b5f47d4 not found: ID does not exist" Feb 24 15:25:57 crc kubenswrapper[4982]: I0224 15:25:57.001778 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" event={"ID":"17fad281-67b9-48ec-b6a3-82ecd730d659","Type":"ContainerStarted","Data":"a1041277e4eeefdacd44df2f2c343721b26f124fab326ee7ba3b7ce15eb3257a"} Feb 24 15:25:57 crc kubenswrapper[4982]: I0224 15:25:57.157565 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="927cc471-c78f-41f4-ae94-7349b5f45215" path="/var/lib/kubelet/pods/927cc471-c78f-41f4-ae94-7349b5f45215/volumes" Feb 24 15:25:58 crc kubenswrapper[4982]: I0224 15:25:58.016408 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" event={"ID":"17fad281-67b9-48ec-b6a3-82ecd730d659","Type":"ContainerStarted","Data":"103c22e29a0f7e393396c684e71e42a2ea3602c5ade9cbc5eec9ce2c565cf181"} Feb 24 15:25:58 crc kubenswrapper[4982]: I0224 15:25:58.032093 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" podStartSLOduration=2.331074463 podStartE2EDuration="3.032065172s" podCreationTimestamp="2026-02-24 15:25:55 +0000 UTC" firstStartedPulling="2026-02-24 15:25:56.054133549 +0000 UTC m=+2217.673192042" lastFinishedPulling="2026-02-24 15:25:56.755124218 +0000 UTC m=+2218.374182751" observedRunningTime="2026-02-24 15:25:58.030201612 +0000 UTC m=+2219.649260115" watchObservedRunningTime="2026-02-24 15:25:58.032065172 +0000 UTC m=+2219.651123695" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.139268 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532446-qjtj2"] Feb 24 15:26:00 crc kubenswrapper[4982]: E0224 15:26:00.140536 4982 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="927cc471-c78f-41f4-ae94-7349b5f45215" containerName="extract-utilities" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.140744 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="927cc471-c78f-41f4-ae94-7349b5f45215" containerName="extract-utilities" Feb 24 15:26:00 crc kubenswrapper[4982]: E0224 15:26:00.140777 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927cc471-c78f-41f4-ae94-7349b5f45215" containerName="registry-server" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.140784 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="927cc471-c78f-41f4-ae94-7349b5f45215" containerName="registry-server" Feb 24 15:26:00 crc kubenswrapper[4982]: E0224 15:26:00.140820 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927cc471-c78f-41f4-ae94-7349b5f45215" containerName="extract-content" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.140826 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="927cc471-c78f-41f4-ae94-7349b5f45215" containerName="extract-content" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.141061 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="927cc471-c78f-41f4-ae94-7349b5f45215" containerName="registry-server" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.142089 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532446-qjtj2" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.144063 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.148152 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.148358 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.154968 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532446-qjtj2"] Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.274992 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trjk4\" (UniqueName: \"kubernetes.io/projected/4d9795c7-e101-4593-a414-c64dc4b42f83-kube-api-access-trjk4\") pod \"auto-csr-approver-29532446-qjtj2\" (UID: \"4d9795c7-e101-4593-a414-c64dc4b42f83\") " pod="openshift-infra/auto-csr-approver-29532446-qjtj2" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.377421 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trjk4\" (UniqueName: \"kubernetes.io/projected/4d9795c7-e101-4593-a414-c64dc4b42f83-kube-api-access-trjk4\") pod \"auto-csr-approver-29532446-qjtj2\" (UID: \"4d9795c7-e101-4593-a414-c64dc4b42f83\") " pod="openshift-infra/auto-csr-approver-29532446-qjtj2" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.404659 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trjk4\" (UniqueName: \"kubernetes.io/projected/4d9795c7-e101-4593-a414-c64dc4b42f83-kube-api-access-trjk4\") pod \"auto-csr-approver-29532446-qjtj2\" (UID: \"4d9795c7-e101-4593-a414-c64dc4b42f83\") " pod="openshift-infra/auto-csr-approver-29532446-qjtj2" Feb 24 15:26:00 crc kubenswrapper[4982]: I0224 15:26:00.501433 4982 util.go:30] "No 
Feb 24 15:26:01 crc kubenswrapper[4982]: I0224 15:26:01.029050 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532446-qjtj2"]
Feb 24 15:26:01 crc kubenswrapper[4982]: I0224 15:26:01.051284 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532446-qjtj2" event={"ID":"4d9795c7-e101-4593-a414-c64dc4b42f83","Type":"ContainerStarted","Data":"283ff69ae604543d1d375d4549a1817ea39838cbc3f1fcd48ffd729c42c406d6"}
Feb 24 15:26:02 crc kubenswrapper[4982]: I0224 15:26:02.041551 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-fjb7l"]
Feb 24 15:26:02 crc kubenswrapper[4982]: I0224 15:26:02.053875 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-fjb7l"]
Feb 24 15:26:02 crc kubenswrapper[4982]: I0224 15:26:02.066672 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-296f-account-create-update-tkrxl"]
Feb 24 15:26:02 crc kubenswrapper[4982]: I0224 15:26:02.081959 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-296f-account-create-update-tkrxl"]
Feb 24 15:26:03 crc kubenswrapper[4982]: I0224 15:26:03.159103 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c574c2e-47b9-43c8-b78d-8a10566c717a" path="/var/lib/kubelet/pods/4c574c2e-47b9-43c8-b78d-8a10566c717a/volumes"
Feb 24 15:26:03 crc kubenswrapper[4982]: I0224 15:26:03.160079 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62cdf76f-7239-423a-93cf-16b6c38c3525" path="/var/lib/kubelet/pods/62cdf76f-7239-423a-93cf-16b6c38c3525/volumes"
Feb 24 15:26:07 crc kubenswrapper[4982]: I0224 15:26:07.129863 4982 generic.go:334] "Generic (PLEG): container finished" podID="4d9795c7-e101-4593-a414-c64dc4b42f83" containerID="5a7e79db896a013ac9438305259c410c7083af313c23228d517fef4ba91fa589" exitCode=0
Feb 24 15:26:07 crc kubenswrapper[4982]: I0224 15:26:07.129939 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532446-qjtj2" event={"ID":"4d9795c7-e101-4593-a414-c64dc4b42f83","Type":"ContainerDied","Data":"5a7e79db896a013ac9438305259c410c7083af313c23228d517fef4ba91fa589"}
Feb 24 15:26:08 crc kubenswrapper[4982]: I0224 15:26:08.665394 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532446-qjtj2"
Feb 24 15:26:08 crc kubenswrapper[4982]: I0224 15:26:08.702909 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trjk4\" (UniqueName: \"kubernetes.io/projected/4d9795c7-e101-4593-a414-c64dc4b42f83-kube-api-access-trjk4\") pod \"4d9795c7-e101-4593-a414-c64dc4b42f83\" (UID: \"4d9795c7-e101-4593-a414-c64dc4b42f83\") "
Feb 24 15:26:08 crc kubenswrapper[4982]: I0224 15:26:08.709343 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9795c7-e101-4593-a414-c64dc4b42f83-kube-api-access-trjk4" (OuterVolumeSpecName: "kube-api-access-trjk4") pod "4d9795c7-e101-4593-a414-c64dc4b42f83" (UID: "4d9795c7-e101-4593-a414-c64dc4b42f83"). InnerVolumeSpecName "kube-api-access-trjk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:26:08 crc kubenswrapper[4982]: I0224 15:26:08.737976 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:26:08 crc kubenswrapper[4982]: I0224 15:26:08.738044 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:26:08 crc kubenswrapper[4982]: I0224 15:26:08.806599 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trjk4\" (UniqueName: \"kubernetes.io/projected/4d9795c7-e101-4593-a414-c64dc4b42f83-kube-api-access-trjk4\") on node \"crc\" DevicePath \"\"" Feb 24 15:26:09 crc kubenswrapper[4982]: I0224 15:26:09.165793 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532446-qjtj2" event={"ID":"4d9795c7-e101-4593-a414-c64dc4b42f83","Type":"ContainerDied","Data":"283ff69ae604543d1d375d4549a1817ea39838cbc3f1fcd48ffd729c42c406d6"} Feb 24 15:26:09 crc kubenswrapper[4982]: I0224 15:26:09.165836 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="283ff69ae604543d1d375d4549a1817ea39838cbc3f1fcd48ffd729c42c406d6" Feb 24 15:26:09 crc kubenswrapper[4982]: I0224 15:26:09.165880 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532446-qjtj2" Feb 24 15:26:09 crc kubenswrapper[4982]: I0224 15:26:09.745421 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532440-qdpb4"] Feb 24 15:26:09 crc kubenswrapper[4982]: I0224 15:26:09.758567 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532440-qdpb4"] Feb 24 15:26:11 crc kubenswrapper[4982]: I0224 15:26:11.160471 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5837cfa1-d5c4-468c-b2a9-75042b7ad1a8" path="/var/lib/kubelet/pods/5837cfa1-d5c4-468c-b2a9-75042b7ad1a8/volumes" Feb 24 15:26:18 crc kubenswrapper[4982]: I0224 15:26:18.044164 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-h76wg"] Feb 24 15:26:18 crc kubenswrapper[4982]: I0224 15:26:18.056707 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-h76wg"] Feb 24 15:26:19 crc kubenswrapper[4982]: I0224 15:26:19.164108 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a6b69b0-ea04-45f2-9961-b3c44fec32b6" path="/var/lib/kubelet/pods/0a6b69b0-ea04-45f2-9961-b3c44fec32b6/volumes" Feb 24 15:26:30 crc kubenswrapper[4982]: I0224 15:26:30.038839 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wd76l"] Feb 24 15:26:30 crc kubenswrapper[4982]: I0224 15:26:30.054194 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wd76l"] Feb 24 15:26:31 crc kubenswrapper[4982]: I0224 15:26:31.161247 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c95d73d5-4913-4638-bfa1-fd9c7539ed88" 
path="/var/lib/kubelet/pods/c95d73d5-4913-4638-bfa1-fd9c7539ed88/volumes" Feb 24 15:26:38 crc kubenswrapper[4982]: I0224 15:26:38.738975 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:26:38 crc kubenswrapper[4982]: I0224 15:26:38.740839 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:26:38 crc kubenswrapper[4982]: I0224 15:26:38.740903 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:26:38 crc kubenswrapper[4982]: I0224 15:26:38.742047 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec6240805dd5fe25be3a90815c44ef652ece8f5ebedbe9f3e922c92054ae3159"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:26:38 crc kubenswrapper[4982]: I0224 15:26:38.742101 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://ec6240805dd5fe25be3a90815c44ef652ece8f5ebedbe9f3e922c92054ae3159" gracePeriod=600 Feb 24 15:26:39 crc kubenswrapper[4982]: I0224 15:26:39.529323 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="ec6240805dd5fe25be3a90815c44ef652ece8f5ebedbe9f3e922c92054ae3159" exitCode=0 Feb 24 15:26:39 crc kubenswrapper[4982]: I0224 15:26:39.529650 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"ec6240805dd5fe25be3a90815c44ef652ece8f5ebedbe9f3e922c92054ae3159"} Feb 24 15:26:39 crc kubenswrapper[4982]: I0224 15:26:39.529781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c"} Feb 24 15:26:39 crc kubenswrapper[4982]: I0224 15:26:39.529808 4982 scope.go:117] "RemoveContainer" containerID="88957df1b17e3ffc591d486702ab6c852bbd42906bf80aa91798111a1022e4d7" Feb 24 15:26:40 crc kubenswrapper[4982]: I0224 15:26:40.546340 4982 generic.go:334] "Generic (PLEG): container finished" podID="17fad281-67b9-48ec-b6a3-82ecd730d659" containerID="103c22e29a0f7e393396c684e71e42a2ea3602c5ade9cbc5eec9ce2c565cf181" exitCode=0 Feb 24 15:26:40 crc kubenswrapper[4982]: I0224 15:26:40.546427 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" 
event={"ID":"17fad281-67b9-48ec-b6a3-82ecd730d659","Type":"ContainerDied","Data":"103c22e29a0f7e393396c684e71e42a2ea3602c5ade9cbc5eec9ce2c565cf181"} Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.068778 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.149882 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd2nh\" (UniqueName: \"kubernetes.io/projected/17fad281-67b9-48ec-b6a3-82ecd730d659-kube-api-access-jd2nh\") pod \"17fad281-67b9-48ec-b6a3-82ecd730d659\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.150130 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-inventory\") pod \"17fad281-67b9-48ec-b6a3-82ecd730d659\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.150294 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-ssh-key-openstack-edpm-ipam\") pod \"17fad281-67b9-48ec-b6a3-82ecd730d659\" (UID: \"17fad281-67b9-48ec-b6a3-82ecd730d659\") " Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.156604 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17fad281-67b9-48ec-b6a3-82ecd730d659-kube-api-access-jd2nh" (OuterVolumeSpecName: "kube-api-access-jd2nh") pod "17fad281-67b9-48ec-b6a3-82ecd730d659" (UID: "17fad281-67b9-48ec-b6a3-82ecd730d659"). InnerVolumeSpecName "kube-api-access-jd2nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.184750 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "17fad281-67b9-48ec-b6a3-82ecd730d659" (UID: "17fad281-67b9-48ec-b6a3-82ecd730d659"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.185677 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-inventory" (OuterVolumeSpecName: "inventory") pod "17fad281-67b9-48ec-b6a3-82ecd730d659" (UID: "17fad281-67b9-48ec-b6a3-82ecd730d659"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.253717 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.253757 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd2nh\" (UniqueName: \"kubernetes.io/projected/17fad281-67b9-48ec-b6a3-82ecd730d659-kube-api-access-jd2nh\") on node \"crc\" DevicePath \"\"" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.253773 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17fad281-67b9-48ec-b6a3-82ecd730d659-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.579713 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" event={"ID":"17fad281-67b9-48ec-b6a3-82ecd730d659","Type":"ContainerDied","Data":"a1041277e4eeefdacd44df2f2c343721b26f124fab326ee7ba3b7ce15eb3257a"} Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.580074 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1041277e4eeefdacd44df2f2c343721b26f124fab326ee7ba3b7ce15eb3257a" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.579832 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.682729 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jb627"] Feb 24 15:26:42 crc kubenswrapper[4982]: E0224 15:26:42.683377 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9795c7-e101-4593-a414-c64dc4b42f83" containerName="oc" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.683403 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9795c7-e101-4593-a414-c64dc4b42f83" containerName="oc" Feb 24 15:26:42 crc kubenswrapper[4982]: E0224 15:26:42.683470 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fad281-67b9-48ec-b6a3-82ecd730d659" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.683483 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fad281-67b9-48ec-b6a3-82ecd730d659" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.683788 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9795c7-e101-4593-a414-c64dc4b42f83" containerName="oc" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.683821 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="17fad281-67b9-48ec-b6a3-82ecd730d659" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.684976 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jb627" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.689456 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.689705 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.689879 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.693938 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.698037 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jb627"] Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.765760 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jb627\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " pod="openstack/ssh-known-hosts-edpm-deployment-jb627" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.766074 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226fc\" (UniqueName: \"kubernetes.io/projected/71c06248-ae2a-4b15-9774-c3f18e1e61cb-kube-api-access-226fc\") pod \"ssh-known-hosts-edpm-deployment-jb627\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " pod="openstack/ssh-known-hosts-edpm-deployment-jb627" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.766270 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jb627\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " pod="openstack/ssh-known-hosts-edpm-deployment-jb627" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.869933 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jb627\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " pod="openstack/ssh-known-hosts-edpm-deployment-jb627" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.870162 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jb627\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " pod="openstack/ssh-known-hosts-edpm-deployment-jb627" Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.870360 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-226fc\" (UniqueName: \"kubernetes.io/projected/71c06248-ae2a-4b15-9774-c3f18e1e61cb-kube-api-access-226fc\") pod \"ssh-known-hosts-edpm-deployment-jb627\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " pod="openstack/ssh-known-hosts-edpm-deployment-jb627" Feb 24 15:26:42 crc 
Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.879741 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jb627\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " pod="openstack/ssh-known-hosts-edpm-deployment-jb627"
Feb 24 15:26:42 crc kubenswrapper[4982]: I0224 15:26:42.900354 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-226fc\" (UniqueName: \"kubernetes.io/projected/71c06248-ae2a-4b15-9774-c3f18e1e61cb-kube-api-access-226fc\") pod \"ssh-known-hosts-edpm-deployment-jb627\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " pod="openstack/ssh-known-hosts-edpm-deployment-jb627"
Feb 24 15:26:43 crc kubenswrapper[4982]: I0224 15:26:43.013015 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jb627"
Feb 24 15:26:43 crc kubenswrapper[4982]: I0224 15:26:43.617308 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jb627"]
Feb 24 15:26:43 crc kubenswrapper[4982]: W0224 15:26:43.617303 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71c06248_ae2a_4b15_9774_c3f18e1e61cb.slice/crio-33dc0d6b79b25f4b0b6d60f934f85af5fbb8d1488693610611f684737422c640 WatchSource:0}: Error finding container 33dc0d6b79b25f4b0b6d60f934f85af5fbb8d1488693610611f684737422c640: Status 404 returned error can't find the container with id 33dc0d6b79b25f4b0b6d60f934f85af5fbb8d1488693610611f684737422c640
Feb 24 15:26:44 crc kubenswrapper[4982]: I0224 15:26:44.603699 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jb627" event={"ID":"71c06248-ae2a-4b15-9774-c3f18e1e61cb","Type":"ContainerStarted","Data":"aff3717ad09ae13f6630b9e31beef39f2faf5bbaae735be169a376f3219c99bc"}
Feb 24 15:26:44 crc kubenswrapper[4982]: I0224 15:26:44.604313 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jb627" event={"ID":"71c06248-ae2a-4b15-9774-c3f18e1e61cb","Type":"ContainerStarted","Data":"33dc0d6b79b25f4b0b6d60f934f85af5fbb8d1488693610611f684737422c640"}
Feb 24 15:26:44 crc kubenswrapper[4982]: I0224 15:26:44.633154 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-jb627" podStartSLOduration=2.180181864 podStartE2EDuration="2.633131769s" podCreationTimestamp="2026-02-24 15:26:42 +0000 UTC" firstStartedPulling="2026-02-24 15:26:43.62159383 +0000 UTC m=+2265.240652343" lastFinishedPulling="2026-02-24 15:26:44.074543755 +0000 UTC m=+2265.693602248" observedRunningTime="2026-02-24 15:26:44.620767813 +0000 UTC m=+2266.239826336" watchObservedRunningTime="2026-02-24 15:26:44.633131769 +0000 UTC m=+2266.252190282"
Feb 24 15:26:46 crc kubenswrapper[4982]: I0224 15:26:46.771900 4982 scope.go:117] "RemoveContainer" containerID="5c1dde1444e0601ced52d96dca09169cec3c135d70e24f8e143b002dad0f65c6"
containerID="5c1dde1444e0601ced52d96dca09169cec3c135d70e24f8e143b002dad0f65c6" Feb 24 15:26:46 crc kubenswrapper[4982]: I0224 15:26:46.803469 4982 scope.go:117] "RemoveContainer" containerID="da56872de2b8d7292f2b680fc740a16738467fd869d631a5999eac452d2b5756" Feb 24 15:26:46 crc kubenswrapper[4982]: I0224 15:26:46.867746 4982 scope.go:117] "RemoveContainer" containerID="8c5182fdc9f626ddde9dd9cd0a446a8c1ddb70a8d9a38a9e21ae823f7c7d02fd" Feb 24 15:26:46 crc kubenswrapper[4982]: I0224 15:26:46.963785 4982 scope.go:117] "RemoveContainer" containerID="be0111c31d3bd8f073bf5f4d422c284b5e81874e1ff64fe8bf13bdb697ecc77a" Feb 24 15:26:46 crc kubenswrapper[4982]: I0224 15:26:46.994522 4982 scope.go:117] "RemoveContainer" containerID="e94abd1a8c937c12b261a84c41223912ce51b68b1c0527f5219c9b09330b801e" Feb 24 15:26:47 crc kubenswrapper[4982]: I0224 15:26:47.063208 4982 scope.go:117] "RemoveContainer" containerID="0792e97b55948427a1e8ad4017199e9342c5e561f1b95377276378d9b189fb6a" Feb 24 15:26:51 crc kubenswrapper[4982]: I0224 15:26:51.678675 4982 generic.go:334] "Generic (PLEG): container finished" podID="71c06248-ae2a-4b15-9774-c3f18e1e61cb" containerID="aff3717ad09ae13f6630b9e31beef39f2faf5bbaae735be169a376f3219c99bc" exitCode=0 Feb 24 15:26:51 crc kubenswrapper[4982]: I0224 15:26:51.678760 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jb627" event={"ID":"71c06248-ae2a-4b15-9774-c3f18e1e61cb","Type":"ContainerDied","Data":"aff3717ad09ae13f6630b9e31beef39f2faf5bbaae735be169a376f3219c99bc"} Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.184056 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jb627" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.245241 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-226fc\" (UniqueName: \"kubernetes.io/projected/71c06248-ae2a-4b15-9774-c3f18e1e61cb-kube-api-access-226fc\") pod \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.245469 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-inventory-0\") pod \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.245511 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-ssh-key-openstack-edpm-ipam\") pod \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\" (UID: \"71c06248-ae2a-4b15-9774-c3f18e1e61cb\") " Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.256247 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c06248-ae2a-4b15-9774-c3f18e1e61cb-kube-api-access-226fc" (OuterVolumeSpecName: "kube-api-access-226fc") pod "71c06248-ae2a-4b15-9774-c3f18e1e61cb" (UID: "71c06248-ae2a-4b15-9774-c3f18e1e61cb"). InnerVolumeSpecName "kube-api-access-226fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.318253 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71c06248-ae2a-4b15-9774-c3f18e1e61cb" (UID: "71c06248-ae2a-4b15-9774-c3f18e1e61cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.320335 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "71c06248-ae2a-4b15-9774-c3f18e1e61cb" (UID: "71c06248-ae2a-4b15-9774-c3f18e1e61cb"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.348726 4982 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.348758 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71c06248-ae2a-4b15-9774-c3f18e1e61cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.348769 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-226fc\" (UniqueName: \"kubernetes.io/projected/71c06248-ae2a-4b15-9774-c3f18e1e61cb-kube-api-access-226fc\") on node \"crc\" DevicePath \"\"" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.702677 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jb627" event={"ID":"71c06248-ae2a-4b15-9774-c3f18e1e61cb","Type":"ContainerDied","Data":"33dc0d6b79b25f4b0b6d60f934f85af5fbb8d1488693610611f684737422c640"} Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.702712 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33dc0d6b79b25f4b0b6d60f934f85af5fbb8d1488693610611f684737422c640" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.702815 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jb627" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.778379 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8"] Feb 24 15:26:53 crc kubenswrapper[4982]: E0224 15:26:53.778942 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c06248-ae2a-4b15-9774-c3f18e1e61cb" containerName="ssh-known-hosts-edpm-deployment" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.778967 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c06248-ae2a-4b15-9774-c3f18e1e61cb" containerName="ssh-known-hosts-edpm-deployment" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.779289 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c06248-ae2a-4b15-9774-c3f18e1e61cb" containerName="ssh-known-hosts-edpm-deployment" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.780312 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.785295 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.785596 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.786553 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.786799 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.810958 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8"] Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.859985 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rlds8\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.860053 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95gdv\" (UniqueName: \"kubernetes.io/projected/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-kube-api-access-95gdv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rlds8\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.860553 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rlds8\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.963294 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rlds8\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.963425 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rlds8\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.963820 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95gdv\" (UniqueName: \"kubernetes.io/projected/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-kube-api-access-95gdv\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-rlds8\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.969563 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rlds8\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.969830 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rlds8\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:53 crc kubenswrapper[4982]: I0224 15:26:53.983091 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95gdv\" (UniqueName: \"kubernetes.io/projected/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-kube-api-access-95gdv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-rlds8\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:54 crc kubenswrapper[4982]: I0224 15:26:54.106137 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:26:54 crc kubenswrapper[4982]: I0224 15:26:54.688072 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8"] Feb 24 15:26:54 crc kubenswrapper[4982]: I0224 15:26:54.715075 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" event={"ID":"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda","Type":"ContainerStarted","Data":"28eb5a68ef9dbde73cd4e6749386a356a16410d5d26f2f6b93907325480b6dbb"} Feb 24 15:26:55 crc kubenswrapper[4982]: I0224 15:26:55.726434 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" event={"ID":"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda","Type":"ContainerStarted","Data":"5e8aa3df5627242b3c8e9868999a47eca309a8881c81d2f5acff7a6aff379f99"} Feb 24 15:26:55 crc kubenswrapper[4982]: I0224 15:26:55.748869 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" podStartSLOduration=2.287168872 podStartE2EDuration="2.748847215s" podCreationTimestamp="2026-02-24 15:26:53 +0000 UTC" firstStartedPulling="2026-02-24 15:26:54.693378313 +0000 UTC m=+2276.312436806" lastFinishedPulling="2026-02-24 15:26:55.155056646 +0000 UTC m=+2276.774115149" observedRunningTime="2026-02-24 15:26:55.745651819 +0000 UTC m=+2277.364710322" watchObservedRunningTime="2026-02-24 15:26:55.748847215 +0000 UTC m=+2277.367905718" Feb 24 15:27:02 crc kubenswrapper[4982]: I0224 15:27:02.806692 4982 generic.go:334] "Generic (PLEG): container finished" podID="d53e94d7-cbbe-429c-8b0e-98bb41fb3dda" containerID="5e8aa3df5627242b3c8e9868999a47eca309a8881c81d2f5acff7a6aff379f99" exitCode=0 Feb 24 15:27:02 crc kubenswrapper[4982]: I0224 15:27:02.807558 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" event={"ID":"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda","Type":"ContainerDied","Data":"5e8aa3df5627242b3c8e9868999a47eca309a8881c81d2f5acff7a6aff379f99"} Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.301924 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.427522 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-inventory\") pod \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.428015 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-ssh-key-openstack-edpm-ipam\") pod \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.428147 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95gdv\" (UniqueName: \"kubernetes.io/projected/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-kube-api-access-95gdv\") pod \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\" (UID: \"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda\") " Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.434217 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-kube-api-access-95gdv" (OuterVolumeSpecName: "kube-api-access-95gdv") pod "d53e94d7-cbbe-429c-8b0e-98bb41fb3dda" (UID: "d53e94d7-cbbe-429c-8b0e-98bb41fb3dda"). InnerVolumeSpecName "kube-api-access-95gdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.468388 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-inventory" (OuterVolumeSpecName: "inventory") pod "d53e94d7-cbbe-429c-8b0e-98bb41fb3dda" (UID: "d53e94d7-cbbe-429c-8b0e-98bb41fb3dda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.481217 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d53e94d7-cbbe-429c-8b0e-98bb41fb3dda" (UID: "d53e94d7-cbbe-429c-8b0e-98bb41fb3dda"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.530600 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.530837 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95gdv\" (UniqueName: \"kubernetes.io/projected/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-kube-api-access-95gdv\") on node \"crc\" DevicePath \"\"" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.530898 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d53e94d7-cbbe-429c-8b0e-98bb41fb3dda-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.826326 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" event={"ID":"d53e94d7-cbbe-429c-8b0e-98bb41fb3dda","Type":"ContainerDied","Data":"28eb5a68ef9dbde73cd4e6749386a356a16410d5d26f2f6b93907325480b6dbb"} Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.826683 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28eb5a68ef9dbde73cd4e6749386a356a16410d5d26f2f6b93907325480b6dbb" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.826412 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-rlds8" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.913710 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb"] Feb 24 15:27:04 crc kubenswrapper[4982]: E0224 15:27:04.914205 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53e94d7-cbbe-429c-8b0e-98bb41fb3dda" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.914227 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53e94d7-cbbe-429c-8b0e-98bb41fb3dda" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.914472 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53e94d7-cbbe-429c-8b0e-98bb41fb3dda" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.915266 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.917001 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.917511 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.917709 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.917536 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:27:04 crc kubenswrapper[4982]: I0224 15:27:04.930994 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb"] Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.043594 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.043653 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh8rw\" (UniqueName: \"kubernetes.io/projected/905ac17f-4256-40f9-b638-454713515dc3-kube-api-access-vh8rw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.043688 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.146620 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.146733 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh8rw\" (UniqueName: \"kubernetes.io/projected/905ac17f-4256-40f9-b638-454713515dc3-kube-api-access-vh8rw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.146801 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.152554 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.152678 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.165171 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh8rw\" (UniqueName: \"kubernetes.io/projected/905ac17f-4256-40f9-b638-454713515dc3-kube-api-access-vh8rw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.237904 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:05 crc kubenswrapper[4982]: I0224 15:27:05.962937 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb"] Feb 24 15:27:06 crc kubenswrapper[4982]: I0224 15:27:06.845428 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" event={"ID":"905ac17f-4256-40f9-b638-454713515dc3","Type":"ContainerStarted","Data":"3046f9cee31e85be3513c99f6a8cf5d03143c2e2ae0d849f093b0e3830da2998"} Feb 24 15:27:06 crc kubenswrapper[4982]: I0224 15:27:06.845786 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" event={"ID":"905ac17f-4256-40f9-b638-454713515dc3","Type":"ContainerStarted","Data":"1fbc1724e75113941a2734c3634e2a4c8a940c4b8f644da1ebc8bd8e8e687854"} Feb 24 15:27:06 crc kubenswrapper[4982]: I0224 15:27:06.871657 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" podStartSLOduration=2.402891441 podStartE2EDuration="2.871637196s" podCreationTimestamp="2026-02-24 15:27:04 +0000 UTC" firstStartedPulling="2026-02-24 15:27:05.959737222 +0000 UTC m=+2287.578795715" lastFinishedPulling="2026-02-24 15:27:06.428482977 +0000 UTC m=+2288.047541470" observedRunningTime="2026-02-24 15:27:06.861801809 +0000 UTC m=+2288.480860312" watchObservedRunningTime="2026-02-24 15:27:06.871637196 +0000 UTC m=+2288.490695689" Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.049731 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-grsdc"] Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.062182 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-grsdc"] Feb 24 15:27:08 crc 
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.132595 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qg5q"
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.144281 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qg5q"]
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.260999 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvb46\" (UniqueName: \"kubernetes.io/projected/bf1cbf79-6c40-42cb-a226-6849e3259e79-kube-api-access-dvb46\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q"
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.261600 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-utilities\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q"
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.261933 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-catalog-content\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q"
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.365147 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-utilities\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q"
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.365290 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-catalog-content\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q"
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.365347 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvb46\" (UniqueName: \"kubernetes.io/projected/bf1cbf79-6c40-42cb-a226-6849e3259e79-kube-api-access-dvb46\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q"
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.365876 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-utilities\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q"
Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.366014 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-catalog-content\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q"
\"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-catalog-content\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q" Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.391407 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvb46\" (UniqueName: \"kubernetes.io/projected/bf1cbf79-6c40-42cb-a226-6849e3259e79-kube-api-access-dvb46\") pod \"redhat-marketplace-4qg5q\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " pod="openshift-marketplace/redhat-marketplace-4qg5q" Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.459579 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qg5q" Feb 24 15:27:08 crc kubenswrapper[4982]: I0224 15:27:08.972385 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qg5q"] Feb 24 15:27:09 crc kubenswrapper[4982]: I0224 15:27:09.161064 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d91be378-9ad1-4685-b2b2-3f96f2f6fc88" path="/var/lib/kubelet/pods/d91be378-9ad1-4685-b2b2-3f96f2f6fc88/volumes" Feb 24 15:27:09 crc kubenswrapper[4982]: I0224 15:27:09.890415 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerID="9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa" exitCode=0 Feb 24 15:27:09 crc kubenswrapper[4982]: I0224 15:27:09.890475 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qg5q" event={"ID":"bf1cbf79-6c40-42cb-a226-6849e3259e79","Type":"ContainerDied","Data":"9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa"} Feb 24 15:27:09 crc kubenswrapper[4982]: I0224 15:27:09.890771 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qg5q" event={"ID":"bf1cbf79-6c40-42cb-a226-6849e3259e79","Type":"ContainerStarted","Data":"d64a2c4e6b1b69f6711cd163068b36bfeff581bdb8bd90b4e306de4cb48366ea"} Feb 24 15:27:11 crc kubenswrapper[4982]: I0224 15:27:11.913592 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerID="429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4" exitCode=0 Feb 24 15:27:11 crc kubenswrapper[4982]: I0224 15:27:11.913811 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qg5q" event={"ID":"bf1cbf79-6c40-42cb-a226-6849e3259e79","Type":"ContainerDied","Data":"429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4"} Feb 24 15:27:12 crc kubenswrapper[4982]: I0224 15:27:12.927632 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qg5q" event={"ID":"bf1cbf79-6c40-42cb-a226-6849e3259e79","Type":"ContainerStarted","Data":"5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9"} Feb 24 15:27:12 crc kubenswrapper[4982]: I0224 15:27:12.948874 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4qg5q" podStartSLOduration=2.381458365 podStartE2EDuration="4.948855159s" podCreationTimestamp="2026-02-24 15:27:08 +0000 UTC" firstStartedPulling="2026-02-24 15:27:09.894136096 +0000 UTC m=+2291.513194589" lastFinishedPulling="2026-02-24 15:27:12.46153288 +0000 UTC m=+2294.080591383" observedRunningTime="2026-02-24 15:27:12.943912385 +0000 
UTC m=+2294.562970898" watchObservedRunningTime="2026-02-24 15:27:12.948855159 +0000 UTC m=+2294.567913652" Feb 24 15:27:15 crc kubenswrapper[4982]: I0224 15:27:15.967821 4982 generic.go:334] "Generic (PLEG): container finished" podID="905ac17f-4256-40f9-b638-454713515dc3" containerID="3046f9cee31e85be3513c99f6a8cf5d03143c2e2ae0d849f093b0e3830da2998" exitCode=0 Feb 24 15:27:15 crc kubenswrapper[4982]: I0224 15:27:15.967941 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" event={"ID":"905ac17f-4256-40f9-b638-454713515dc3","Type":"ContainerDied","Data":"3046f9cee31e85be3513c99f6a8cf5d03143c2e2ae0d849f093b0e3830da2998"} Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.477218 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.505186 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh8rw\" (UniqueName: \"kubernetes.io/projected/905ac17f-4256-40f9-b638-454713515dc3-kube-api-access-vh8rw\") pod \"905ac17f-4256-40f9-b638-454713515dc3\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.505609 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-ssh-key-openstack-edpm-ipam\") pod \"905ac17f-4256-40f9-b638-454713515dc3\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.505752 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-inventory\") pod \"905ac17f-4256-40f9-b638-454713515dc3\" (UID: \"905ac17f-4256-40f9-b638-454713515dc3\") " Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.511454 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905ac17f-4256-40f9-b638-454713515dc3-kube-api-access-vh8rw" (OuterVolumeSpecName: "kube-api-access-vh8rw") pod "905ac17f-4256-40f9-b638-454713515dc3" (UID: "905ac17f-4256-40f9-b638-454713515dc3"). InnerVolumeSpecName "kube-api-access-vh8rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.542333 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-inventory" (OuterVolumeSpecName: "inventory") pod "905ac17f-4256-40f9-b638-454713515dc3" (UID: "905ac17f-4256-40f9-b638-454713515dc3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.543980 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "905ac17f-4256-40f9-b638-454713515dc3" (UID: "905ac17f-4256-40f9-b638-454713515dc3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.609760 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh8rw\" (UniqueName: \"kubernetes.io/projected/905ac17f-4256-40f9-b638-454713515dc3-kube-api-access-vh8rw\") on node \"crc\" DevicePath \"\"" Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.610119 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.610239 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/905ac17f-4256-40f9-b638-454713515dc3-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.993898 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" event={"ID":"905ac17f-4256-40f9-b638-454713515dc3","Type":"ContainerDied","Data":"1fbc1724e75113941a2734c3634e2a4c8a940c4b8f644da1ebc8bd8e8e687854"} Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.993951 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fbc1724e75113941a2734c3634e2a4c8a940c4b8f644da1ebc8bd8e8e687854" Feb 24 15:27:17 crc kubenswrapper[4982]: I0224 15:27:17.994345 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.108295 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv"] Feb 24 15:27:18 crc kubenswrapper[4982]: E0224 15:27:18.109178 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905ac17f-4256-40f9-b638-454713515dc3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.109215 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="905ac17f-4256-40f9-b638-454713515dc3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.109654 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="905ac17f-4256-40f9-b638-454713515dc3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.110696 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.116746 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.116993 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.117119 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.117241 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.117597 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.117812 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.117858 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.119673 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.120419 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.125933 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv"] Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.225709 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.225768 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.225822 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.225932 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.225997 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226078 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226110 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226152 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226233 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226292 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226347 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226384 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226421 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226446 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nhtk\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-kube-api-access-6nhtk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226474 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.226550 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329029 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329115 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329168 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329205 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329238 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329266 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nhtk\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-kube-api-access-6nhtk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329289 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329332 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329398 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329776 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329819 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329885 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329925 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329962 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.329985 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.330013 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.334232 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.335370 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.335552 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.336284 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.336666 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.338059 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.338310 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.338335 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.339108 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.339900 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.340075 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.340964 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.341579 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.341643 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.341989 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc 
kubenswrapper[4982]: I0224 15:27:18.353464 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nhtk\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-kube-api-access-6nhtk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.433272 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.459995 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4qg5q" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.460066 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4qg5q" Feb 24 15:27:18 crc kubenswrapper[4982]: I0224 15:27:18.532675 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4qg5q" Feb 24 15:27:19 crc kubenswrapper[4982]: I0224 15:27:19.048546 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv"] Feb 24 15:27:19 crc kubenswrapper[4982]: I0224 15:27:19.080066 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4qg5q" Feb 24 15:27:19 crc kubenswrapper[4982]: I0224 15:27:19.157229 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qg5q"] Feb 24 15:27:20 crc kubenswrapper[4982]: I0224 15:27:20.017592 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" event={"ID":"6a256c5e-deb6-4688-a9d7-080502e07609","Type":"ContainerStarted","Data":"5702ca6912f85d7edeee573195002b6eddd0b1e99abace8771fac8737c71fc1b"} Feb 24 15:27:20 crc kubenswrapper[4982]: I0224 15:27:20.018483 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" event={"ID":"6a256c5e-deb6-4688-a9d7-080502e07609","Type":"ContainerStarted","Data":"18bd022b411369ef269327787e1ec1cf6198ff97ed1767e61c4e5ddf88f3d1dc"} Feb 24 15:27:20 crc kubenswrapper[4982]: I0224 15:27:20.047355 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" podStartSLOduration=1.492537474 podStartE2EDuration="2.047334295s" podCreationTimestamp="2026-02-24 15:27:18 +0000 UTC" firstStartedPulling="2026-02-24 15:27:19.061776481 +0000 UTC m=+2300.680834974" lastFinishedPulling="2026-02-24 15:27:19.616573302 +0000 UTC m=+2301.235631795" observedRunningTime="2026-02-24 15:27:20.042674089 +0000 UTC m=+2301.661732592" watchObservedRunningTime="2026-02-24 15:27:20.047334295 +0000 UTC m=+2301.666392788" Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.026530 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4qg5q" podUID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerName="registry-server" containerID="cri-o://5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9" gracePeriod=2 Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.558711 4982 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qg5q" Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.629831 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-utilities\") pod \"bf1cbf79-6c40-42cb-a226-6849e3259e79\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.630181 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-catalog-content\") pod \"bf1cbf79-6c40-42cb-a226-6849e3259e79\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.630267 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvb46\" (UniqueName: \"kubernetes.io/projected/bf1cbf79-6c40-42cb-a226-6849e3259e79-kube-api-access-dvb46\") pod \"bf1cbf79-6c40-42cb-a226-6849e3259e79\" (UID: \"bf1cbf79-6c40-42cb-a226-6849e3259e79\") " Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.632295 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-utilities" (OuterVolumeSpecName: "utilities") pod "bf1cbf79-6c40-42cb-a226-6849e3259e79" (UID: "bf1cbf79-6c40-42cb-a226-6849e3259e79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.653772 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1cbf79-6c40-42cb-a226-6849e3259e79-kube-api-access-dvb46" (OuterVolumeSpecName: "kube-api-access-dvb46") pod "bf1cbf79-6c40-42cb-a226-6849e3259e79" (UID: "bf1cbf79-6c40-42cb-a226-6849e3259e79"). InnerVolumeSpecName "kube-api-access-dvb46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.656720 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf1cbf79-6c40-42cb-a226-6849e3259e79" (UID: "bf1cbf79-6c40-42cb-a226-6849e3259e79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.733000 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.733045 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvb46\" (UniqueName: \"kubernetes.io/projected/bf1cbf79-6c40-42cb-a226-6849e3259e79-kube-api-access-dvb46\") on node \"crc\" DevicePath \"\"" Feb 24 15:27:21 crc kubenswrapper[4982]: I0224 15:27:21.733064 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf1cbf79-6c40-42cb-a226-6849e3259e79-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.049139 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerID="5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9" exitCode=0 Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.049237 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qg5q" event={"ID":"bf1cbf79-6c40-42cb-a226-6849e3259e79","Type":"ContainerDied","Data":"5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9"} Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.049270 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qg5q" event={"ID":"bf1cbf79-6c40-42cb-a226-6849e3259e79","Type":"ContainerDied","Data":"d64a2c4e6b1b69f6711cd163068b36bfeff581bdb8bd90b4e306de4cb48366ea"} Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.049290 4982 scope.go:117] "RemoveContainer" containerID="5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.049566 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qg5q" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.096749 4982 scope.go:117] "RemoveContainer" containerID="429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.132878 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qg5q"] Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.144181 4982 scope.go:117] "RemoveContainer" containerID="9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.147217 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qg5q"] Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.195933 4982 scope.go:117] "RemoveContainer" containerID="5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9" Feb 24 15:27:22 crc kubenswrapper[4982]: E0224 15:27:22.196272 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9\": container with ID starting with 5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9 not found: ID does not exist" containerID="5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.196307 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9"} err="failed to get container status \"5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9\": rpc error: code = NotFound desc = could not find container \"5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9\": container with ID starting with 5a33adccd09bdc8884e56860fb4f46972a4c4e414c81083d710515bbdb0398f9 not found: ID does not exist" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.196336 4982 scope.go:117] "RemoveContainer" containerID="429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4" Feb 24 15:27:22 crc kubenswrapper[4982]: E0224 15:27:22.196850 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4\": container with ID starting with 429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4 not found: ID does not exist" containerID="429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.196885 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4"} err="failed to get container status \"429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4\": rpc error: code = NotFound desc = could not find container \"429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4\": container with ID starting with 429a857d572a6d2a7530fc36dab51ba88d800b28466ad2b25eed1fb0841779d4 not found: ID does not exist" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.196905 4982 scope.go:117] "RemoveContainer" containerID="9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa" Feb 24 15:27:22 crc kubenswrapper[4982]: E0224 15:27:22.197296 4982 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa\": container with ID starting with 9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa not found: ID does not exist" containerID="9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa" Feb 24 15:27:22 crc kubenswrapper[4982]: I0224 15:27:22.197326 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa"} err="failed to get container status \"9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa\": rpc error: code = NotFound desc = could not find container \"9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa\": container with ID starting with 9dc2c3147cb180c93e1ca1c0e3d3d2f6a1a5e2c528e7b0d2e7cd2f433abd70fa not found: ID does not exist" Feb 24 15:27:23 crc kubenswrapper[4982]: I0224 15:27:23.159668 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1cbf79-6c40-42cb-a226-6849e3259e79" path="/var/lib/kubelet/pods/bf1cbf79-6c40-42cb-a226-6849e3259e79/volumes" Feb 24 15:27:47 crc kubenswrapper[4982]: I0224 15:27:47.266830 4982 scope.go:117] "RemoveContainer" containerID="3249a603e0f5078d12cb824b322cbea3865cacff30dd643dc621f627d1de372c" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.157468 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532448-v4r99"] Feb 24 15:28:00 crc kubenswrapper[4982]: E0224 15:28:00.159211 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerName="extract-utilities" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.159482 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerName="extract-utilities" Feb 24 15:28:00 crc kubenswrapper[4982]: E0224 15:28:00.159618 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerName="registry-server" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.159655 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerName="registry-server" Feb 24 15:28:00 crc kubenswrapper[4982]: E0224 15:28:00.159745 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerName="extract-content" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.159770 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerName="extract-content" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.160377 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1cbf79-6c40-42cb-a226-6849e3259e79" containerName="registry-server" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.162452 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532448-v4r99" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.165190 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.165410 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.165479 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.181274 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532448-v4r99"] Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.276837 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs2xc\" (UniqueName: \"kubernetes.io/projected/d3f617b2-6e46-4b3f-9bdf-999597d27e87-kube-api-access-vs2xc\") pod \"auto-csr-approver-29532448-v4r99\" (UID: \"d3f617b2-6e46-4b3f-9bdf-999597d27e87\") " pod="openshift-infra/auto-csr-approver-29532448-v4r99" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.380167 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs2xc\" (UniqueName: \"kubernetes.io/projected/d3f617b2-6e46-4b3f-9bdf-999597d27e87-kube-api-access-vs2xc\") pod \"auto-csr-approver-29532448-v4r99\" (UID: \"d3f617b2-6e46-4b3f-9bdf-999597d27e87\") " pod="openshift-infra/auto-csr-approver-29532448-v4r99" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.417383 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs2xc\" (UniqueName: \"kubernetes.io/projected/d3f617b2-6e46-4b3f-9bdf-999597d27e87-kube-api-access-vs2xc\") pod \"auto-csr-approver-29532448-v4r99\" (UID: \"d3f617b2-6e46-4b3f-9bdf-999597d27e87\") " pod="openshift-infra/auto-csr-approver-29532448-v4r99" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.494001 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532448-v4r99" Feb 24 15:28:00 crc kubenswrapper[4982]: I0224 15:28:00.998604 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532448-v4r99"] Feb 24 15:28:01 crc kubenswrapper[4982]: I0224 15:28:01.512012 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532448-v4r99" event={"ID":"d3f617b2-6e46-4b3f-9bdf-999597d27e87","Type":"ContainerStarted","Data":"7f689fb73ac815a737ab25c1436cfabc38d76c57b9285384bd788c79eb327cf2"} Feb 24 15:28:01 crc kubenswrapper[4982]: I0224 15:28:01.515917 4982 generic.go:334] "Generic (PLEG): container finished" podID="6a256c5e-deb6-4688-a9d7-080502e07609" containerID="5702ca6912f85d7edeee573195002b6eddd0b1e99abace8771fac8737c71fc1b" exitCode=0 Feb 24 15:28:01 crc kubenswrapper[4982]: I0224 15:28:01.515967 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" event={"ID":"6a256c5e-deb6-4688-a9d7-080502e07609","Type":"ContainerDied","Data":"5702ca6912f85d7edeee573195002b6eddd0b1e99abace8771fac8737c71fc1b"} Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.097645 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.183195 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-bootstrap-combined-ca-bundle\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.183243 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-inventory\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.183269 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-power-monitoring-combined-ca-bundle\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.183319 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ssh-key-openstack-edpm-ipam\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.191421 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.191835 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.233265 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-inventory" (OuterVolumeSpecName: "inventory") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.285885 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.285945 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.285985 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286015 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286047 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-ovn-default-certs-0\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286101 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-nova-combined-ca-bundle\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286138 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ovn-combined-ca-bundle\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286163 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nhtk\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-kube-api-access-6nhtk\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286271 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-repo-setup-combined-ca-bundle\") 
pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286312 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-neutron-metadata-combined-ca-bundle\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286352 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-libvirt-combined-ca-bundle\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286382 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-combined-ca-bundle\") pod \"6a256c5e-deb6-4688-a9d7-080502e07609\" (UID: \"6a256c5e-deb6-4688-a9d7-080502e07609\") " Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286905 4982 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286926 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.286940 4982 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.303800 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.340747 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-kube-api-access-6nhtk" (OuterVolumeSpecName: "kube-api-access-6nhtk") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "kube-api-access-6nhtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.341129 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.341464 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.343151 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.351344 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.355864 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.355909 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.355950 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.355967 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.355998 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.369236 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.394773 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6a256c5e-deb6-4688-a9d7-080502e07609" (UID: "6a256c5e-deb6-4688-a9d7-080502e07609"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398240 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398295 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398312 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398326 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398340 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398354 4982 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-nova-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398366 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398377 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nhtk\" (UniqueName: \"kubernetes.io/projected/6a256c5e-deb6-4688-a9d7-080502e07609-kube-api-access-6nhtk\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398397 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398413 4982 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398427 4982 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398440 4982 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.398453 4982 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a256c5e-deb6-4688-a9d7-080502e07609-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.550842 4982 generic.go:334] "Generic (PLEG): container finished" podID="d3f617b2-6e46-4b3f-9bdf-999597d27e87" containerID="e28d002b1d0b1a1ba5df2632631eca92fcda177e89ae678e522cf44c713403e9" exitCode=0 Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.550902 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532448-v4r99" event={"ID":"d3f617b2-6e46-4b3f-9bdf-999597d27e87","Type":"ContainerDied","Data":"e28d002b1d0b1a1ba5df2632631eca92fcda177e89ae678e522cf44c713403e9"} Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.561174 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" event={"ID":"6a256c5e-deb6-4688-a9d7-080502e07609","Type":"ContainerDied","Data":"18bd022b411369ef269327787e1ec1cf6198ff97ed1767e61c4e5ddf88f3d1dc"} Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.561214 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18bd022b411369ef269327787e1ec1cf6198ff97ed1767e61c4e5ddf88f3d1dc" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.561297 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.644349 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7"] Feb 24 15:28:03 crc kubenswrapper[4982]: E0224 15:28:03.644839 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a256c5e-deb6-4688-a9d7-080502e07609" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.644852 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a256c5e-deb6-4688-a9d7-080502e07609" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.645058 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a256c5e-deb6-4688-a9d7-080502e07609" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.645874 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.651530 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.651545 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.651747 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.651830 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.651861 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.684993 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7"] Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.705130 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.705196 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.705279 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65rjc\" (UniqueName: \"kubernetes.io/projected/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-kube-api-access-65rjc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.705313 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.705451 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.807136 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.807260 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.807286 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.807344 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65rjc\" (UniqueName: \"kubernetes.io/projected/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-kube-api-access-65rjc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.807368 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.808081 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.810854 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.811230 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.811329 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.839427 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65rjc\" (UniqueName: \"kubernetes.io/projected/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-kube-api-access-65rjc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-srvk7\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:03 crc kubenswrapper[4982]: I0224 15:28:03.974182 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:28:04 crc kubenswrapper[4982]: I0224 15:28:04.551063 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7"] Feb 24 15:28:04 crc kubenswrapper[4982]: I0224 15:28:04.573532 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" event={"ID":"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1","Type":"ContainerStarted","Data":"7c6c0ebd005804757a57bca7337565b741543bafc895e24bc4faef6f1b5a54e6"} Feb 24 15:28:04 crc kubenswrapper[4982]: I0224 15:28:04.959225 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532448-v4r99" Feb 24 15:28:05 crc kubenswrapper[4982]: I0224 15:28:05.037610 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs2xc\" (UniqueName: \"kubernetes.io/projected/d3f617b2-6e46-4b3f-9bdf-999597d27e87-kube-api-access-vs2xc\") pod \"d3f617b2-6e46-4b3f-9bdf-999597d27e87\" (UID: \"d3f617b2-6e46-4b3f-9bdf-999597d27e87\") " Feb 24 15:28:05 crc kubenswrapper[4982]: I0224 15:28:05.043601 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f617b2-6e46-4b3f-9bdf-999597d27e87-kube-api-access-vs2xc" (OuterVolumeSpecName: "kube-api-access-vs2xc") pod "d3f617b2-6e46-4b3f-9bdf-999597d27e87" (UID: "d3f617b2-6e46-4b3f-9bdf-999597d27e87"). InnerVolumeSpecName "kube-api-access-vs2xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:28:05 crc kubenswrapper[4982]: I0224 15:28:05.140766 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs2xc\" (UniqueName: \"kubernetes.io/projected/d3f617b2-6e46-4b3f-9bdf-999597d27e87-kube-api-access-vs2xc\") on node \"crc\" DevicePath \"\"" Feb 24 15:28:05 crc kubenswrapper[4982]: I0224 15:28:05.587922 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" event={"ID":"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1","Type":"ContainerStarted","Data":"f6a678640fec0964103be7acf50016e80140d802a69eede77e7af62f94d51549"} Feb 24 15:28:05 crc kubenswrapper[4982]: I0224 15:28:05.589578 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532448-v4r99" event={"ID":"d3f617b2-6e46-4b3f-9bdf-999597d27e87","Type":"ContainerDied","Data":"7f689fb73ac815a737ab25c1436cfabc38d76c57b9285384bd788c79eb327cf2"} Feb 24 15:28:05 crc kubenswrapper[4982]: I0224 15:28:05.589640 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f689fb73ac815a737ab25c1436cfabc38d76c57b9285384bd788c79eb327cf2" Feb 24 15:28:05 crc kubenswrapper[4982]: I0224 15:28:05.589641 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532448-v4r99" Feb 24 15:28:05 crc kubenswrapper[4982]: I0224 15:28:05.617940 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" podStartSLOduration=2.22162608 podStartE2EDuration="2.617911388s" podCreationTimestamp="2026-02-24 15:28:03 +0000 UTC" firstStartedPulling="2026-02-24 15:28:04.549432793 +0000 UTC m=+2346.168491286" lastFinishedPulling="2026-02-24 15:28:04.945718101 +0000 UTC m=+2346.564776594" observedRunningTime="2026-02-24 15:28:05.607935107 +0000 UTC m=+2347.226993600" watchObservedRunningTime="2026-02-24 15:28:05.617911388 +0000 UTC m=+2347.236969921" Feb 24 15:28:06 crc kubenswrapper[4982]: I0224 15:28:06.046583 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532442-hngb4"] Feb 24 15:28:06 crc kubenswrapper[4982]: I0224 15:28:06.059798 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532442-hngb4"] Feb 24 15:28:07 crc kubenswrapper[4982]: I0224 15:28:07.170802 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d51af88-ef7d-46e7-ab4b-d1ccf200a068" path="/var/lib/kubelet/pods/2d51af88-ef7d-46e7-ab4b-d1ccf200a068/volumes" Feb 24 15:28:28 crc kubenswrapper[4982]: I0224 15:28:28.073466 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-qf6l6"] Feb 24 15:28:28 crc kubenswrapper[4982]: I0224 15:28:28.098154 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-qf6l6"] Feb 24 15:28:29 crc kubenswrapper[4982]: I0224 15:28:29.160197 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c" path="/var/lib/kubelet/pods/bc7dc616-5801-43d1-b2d7-c1b1aef4ed1c/volumes" Feb 24 15:28:47 crc kubenswrapper[4982]: I0224 15:28:47.423502 4982 scope.go:117] "RemoveContainer" containerID="4f8de8f1edd75dc7844f16517d4b103f53648a8863bb2ceacca8474df343827c" Feb 24 15:28:47 crc kubenswrapper[4982]: I0224 15:28:47.471479 4982 scope.go:117] "RemoveContainer" 
containerID="8c0b9c3d57633e69505bf14a2d38b447dfff72838d448b8b95fbe43d9db99c4b" Feb 24 15:29:06 crc kubenswrapper[4982]: I0224 15:29:06.292171 4982 generic.go:334] "Generic (PLEG): container finished" podID="1646ea82-1708-4dbe-b6af-7cc0c9fe35b1" containerID="f6a678640fec0964103be7acf50016e80140d802a69eede77e7af62f94d51549" exitCode=0 Feb 24 15:29:06 crc kubenswrapper[4982]: I0224 15:29:06.292212 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" event={"ID":"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1","Type":"ContainerDied","Data":"f6a678640fec0964103be7acf50016e80140d802a69eede77e7af62f94d51549"} Feb 24 15:29:07 crc kubenswrapper[4982]: I0224 15:29:07.942069 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.043328 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-7nhgx"] Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.061833 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-7nhgx"] Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.062015 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovn-combined-ca-bundle\") pod \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.062169 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovncontroller-config-0\") pod \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.062201 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65rjc\" (UniqueName: \"kubernetes.io/projected/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-kube-api-access-65rjc\") pod \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.062367 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-inventory\") pod \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.062462 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ssh-key-openstack-edpm-ipam\") pod \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\" (UID: \"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1\") " Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.077206 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-kube-api-access-65rjc" (OuterVolumeSpecName: "kube-api-access-65rjc") pod "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1" (UID: "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1"). InnerVolumeSpecName "kube-api-access-65rjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.078786 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1" (UID: "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.099288 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1" (UID: "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.105314 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1" (UID: "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.108696 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-inventory" (OuterVolumeSpecName: "inventory") pod "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1" (UID: "1646ea82-1708-4dbe-b6af-7cc0c9fe35b1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.165391 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65rjc\" (UniqueName: \"kubernetes.io/projected/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-kube-api-access-65rjc\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.165427 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.165440 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.165450 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.165461 4982 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1646ea82-1708-4dbe-b6af-7cc0c9fe35b1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.323121 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" event={"ID":"1646ea82-1708-4dbe-b6af-7cc0c9fe35b1","Type":"ContainerDied","Data":"7c6c0ebd005804757a57bca7337565b741543bafc895e24bc4faef6f1b5a54e6"} Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.323164 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c6c0ebd005804757a57bca7337565b741543bafc895e24bc4faef6f1b5a54e6" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.323227 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-srvk7" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.418293 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn"] Feb 24 15:29:08 crc kubenswrapper[4982]: E0224 15:29:08.418858 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f617b2-6e46-4b3f-9bdf-999597d27e87" containerName="oc" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.418879 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f617b2-6e46-4b3f-9bdf-999597d27e87" containerName="oc" Feb 24 15:29:08 crc kubenswrapper[4982]: E0224 15:29:08.418926 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1646ea82-1708-4dbe-b6af-7cc0c9fe35b1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.418935 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1646ea82-1708-4dbe-b6af-7cc0c9fe35b1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.419205 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f617b2-6e46-4b3f-9bdf-999597d27e87" containerName="oc" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.419232 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1646ea82-1708-4dbe-b6af-7cc0c9fe35b1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.420057 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.422160 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.422707 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.422840 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.428797 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.429319 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.429469 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.433223 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn"] Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.574511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpc4l\" (UniqueName: \"kubernetes.io/projected/c374a6b9-31c3-45f7-a188-ec0dc5df244d-kube-api-access-xpc4l\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.574906 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.575095 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.575233 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.575407 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.575587 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.677415 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpc4l\" (UniqueName: \"kubernetes.io/projected/c374a6b9-31c3-45f7-a188-ec0dc5df244d-kube-api-access-xpc4l\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.677472 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.677552 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.677585 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.677644 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.677661 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.683189 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.683776 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.684924 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.687320 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.690064 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.703719 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpc4l\" (UniqueName: \"kubernetes.io/projected/c374a6b9-31c3-45f7-a188-ec0dc5df244d-kube-api-access-xpc4l\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.737834 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.737910 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:29:08 crc kubenswrapper[4982]: I0224 15:29:08.754804 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:09 crc kubenswrapper[4982]: I0224 15:29:09.163352 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf05096-32e4-4781-8604-33ec559bec9a" path="/var/lib/kubelet/pods/5cf05096-32e4-4781-8604-33ec559bec9a/volumes" Feb 24 15:29:09 crc kubenswrapper[4982]: I0224 15:29:09.348187 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn"] Feb 24 15:29:09 crc kubenswrapper[4982]: I0224 15:29:09.358416 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 15:29:10 crc kubenswrapper[4982]: I0224 15:29:10.352416 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" event={"ID":"c374a6b9-31c3-45f7-a188-ec0dc5df244d","Type":"ContainerStarted","Data":"798249d5b3c15781fa2b519079b88648a1b1cbf527f8ffbd6321de2c13af9c4a"} Feb 24 15:29:10 crc kubenswrapper[4982]: I0224 15:29:10.352819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" event={"ID":"c374a6b9-31c3-45f7-a188-ec0dc5df244d","Type":"ContainerStarted","Data":"8a4d14a2ca20065bcbb9eac23192c5f40e02695ce4556148d81786922a09666c"} Feb 24 15:29:38 crc kubenswrapper[4982]: I0224 15:29:38.737780 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:29:38 crc kubenswrapper[4982]: I0224 15:29:38.738616 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:29:47 crc kubenswrapper[4982]: I0224 15:29:47.632952 4982 scope.go:117] "RemoveContainer" containerID="ad07d0da6ed74db672a59c9abc0971826c57268144cf04310c8816718776d557" Feb 24 15:29:56 crc kubenswrapper[4982]: I0224 15:29:56.878092 4982 generic.go:334] "Generic (PLEG): container finished" podID="c374a6b9-31c3-45f7-a188-ec0dc5df244d" containerID="798249d5b3c15781fa2b519079b88648a1b1cbf527f8ffbd6321de2c13af9c4a" exitCode=0 Feb 24 15:29:56 crc kubenswrapper[4982]: I0224 15:29:56.878182 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" event={"ID":"c374a6b9-31c3-45f7-a188-ec0dc5df244d","Type":"ContainerDied","Data":"798249d5b3c15781fa2b519079b88648a1b1cbf527f8ffbd6321de2c13af9c4a"} Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.419423 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.587110 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-ssh-key-openstack-edpm-ipam\") pod \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.587317 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpc4l\" (UniqueName: \"kubernetes.io/projected/c374a6b9-31c3-45f7-a188-ec0dc5df244d-kube-api-access-xpc4l\") pod \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.587386 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.587591 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-nova-metadata-neutron-config-0\") pod \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.587697 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-inventory\") pod \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.587727 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-metadata-combined-ca-bundle\") pod \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\" (UID: \"c374a6b9-31c3-45f7-a188-ec0dc5df244d\") " Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.597164 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c374a6b9-31c3-45f7-a188-ec0dc5df244d-kube-api-access-xpc4l" (OuterVolumeSpecName: "kube-api-access-xpc4l") pod "c374a6b9-31c3-45f7-a188-ec0dc5df244d" (UID: "c374a6b9-31c3-45f7-a188-ec0dc5df244d"). InnerVolumeSpecName "kube-api-access-xpc4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.600698 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c374a6b9-31c3-45f7-a188-ec0dc5df244d" (UID: "c374a6b9-31c3-45f7-a188-ec0dc5df244d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.648716 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c374a6b9-31c3-45f7-a188-ec0dc5df244d" (UID: "c374a6b9-31c3-45f7-a188-ec0dc5df244d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.648761 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c374a6b9-31c3-45f7-a188-ec0dc5df244d" (UID: "c374a6b9-31c3-45f7-a188-ec0dc5df244d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.658733 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c374a6b9-31c3-45f7-a188-ec0dc5df244d" (UID: "c374a6b9-31c3-45f7-a188-ec0dc5df244d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.658776 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-inventory" (OuterVolumeSpecName: "inventory") pod "c374a6b9-31c3-45f7-a188-ec0dc5df244d" (UID: "c374a6b9-31c3-45f7-a188-ec0dc5df244d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.691203 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpc4l\" (UniqueName: \"kubernetes.io/projected/c374a6b9-31c3-45f7-a188-ec0dc5df244d-kube-api-access-xpc4l\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.691246 4982 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.691262 4982 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.691276 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.691288 4982 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.691303 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c374a6b9-31c3-45f7-a188-ec0dc5df244d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.903558 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" event={"ID":"c374a6b9-31c3-45f7-a188-ec0dc5df244d","Type":"ContainerDied","Data":"8a4d14a2ca20065bcbb9eac23192c5f40e02695ce4556148d81786922a09666c"} Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.903608 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a4d14a2ca20065bcbb9eac23192c5f40e02695ce4556148d81786922a09666c" Feb 24 15:29:58 crc kubenswrapper[4982]: I0224 15:29:58.903606 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.019811 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx"] Feb 24 15:29:59 crc kubenswrapper[4982]: E0224 15:29:59.020400 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c374a6b9-31c3-45f7-a188-ec0dc5df244d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.020422 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c374a6b9-31c3-45f7-a188-ec0dc5df244d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.020633 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c374a6b9-31c3-45f7-a188-ec0dc5df244d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.021431 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.023257 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.024050 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.024078 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.052584 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.053084 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.070043 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx"] Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.204753 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.204842 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.204937 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4k2g\" (UniqueName: \"kubernetes.io/projected/1b824030-74f0-4482-b022-6c9cc5e52aac-kube-api-access-h4k2g\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: 
\"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.205161 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.205678 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.308696 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.308774 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.308842 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.308914 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4k2g\" (UniqueName: \"kubernetes.io/projected/1b824030-74f0-4482-b022-6c9cc5e52aac-kube-api-access-h4k2g\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.309062 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.311548 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.311630 4982 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.311909 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.313737 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.323187 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.324173 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.324324 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.326645 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4k2g\" (UniqueName: \"kubernetes.io/projected/1b824030-74f0-4482-b022-6c9cc5e52aac-kube-api-access-h4k2g\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.372702 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.381953 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:29:59 crc kubenswrapper[4982]: I0224 15:29:59.957907 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx"] Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.148037 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532450-4blkh"] Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.149804 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532450-4blkh" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.152865 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.153360 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.153671 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.162045 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532450-4blkh"] Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.233010 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbpzq\" (UniqueName: \"kubernetes.io/projected/bc5293e6-7402-4181-89fc-27675b8d43d8-kube-api-access-qbpzq\") pod \"auto-csr-approver-29532450-4blkh\" (UID: \"bc5293e6-7402-4181-89fc-27675b8d43d8\") " pod="openshift-infra/auto-csr-approver-29532450-4blkh" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.243753 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f"] Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.245647 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.249019 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.249036 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.260932 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f"] Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.334580 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-secret-volume\") pod \"collect-profiles-29532450-4w99f\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.334703 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbpzq\" (UniqueName: \"kubernetes.io/projected/bc5293e6-7402-4181-89fc-27675b8d43d8-kube-api-access-qbpzq\") pod \"auto-csr-approver-29532450-4blkh\" (UID: \"bc5293e6-7402-4181-89fc-27675b8d43d8\") " pod="openshift-infra/auto-csr-approver-29532450-4blkh" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.334794 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-config-volume\") pod \"collect-profiles-29532450-4w99f\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: 
I0224 15:30:00.335046 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8xh9\" (UniqueName: \"kubernetes.io/projected/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-kube-api-access-h8xh9\") pod \"collect-profiles-29532450-4w99f\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.354017 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbpzq\" (UniqueName: \"kubernetes.io/projected/bc5293e6-7402-4181-89fc-27675b8d43d8-kube-api-access-qbpzq\") pod \"auto-csr-approver-29532450-4blkh\" (UID: \"bc5293e6-7402-4181-89fc-27675b8d43d8\") " pod="openshift-infra/auto-csr-approver-29532450-4blkh" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.439207 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-secret-volume\") pod \"collect-profiles-29532450-4w99f\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.440551 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-config-volume\") pod \"collect-profiles-29532450-4w99f\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.440607 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-config-volume\") pod \"collect-profiles-29532450-4w99f\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.441144 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8xh9\" (UniqueName: \"kubernetes.io/projected/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-kube-api-access-h8xh9\") pod \"collect-profiles-29532450-4w99f\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.444607 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-secret-volume\") pod \"collect-profiles-29532450-4w99f\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.446264 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.459114 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8xh9\" (UniqueName: \"kubernetes.io/projected/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-kube-api-access-h8xh9\") pod \"collect-profiles-29532450-4w99f\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc 
kubenswrapper[4982]: I0224 15:30:00.498742 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532450-4blkh" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.569825 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.939878 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" event={"ID":"1b824030-74f0-4482-b022-6c9cc5e52aac","Type":"ContainerStarted","Data":"4a455f1e218b9f50eee29bc8fa0ec2ab9047f2572ce0f4d662035fa75e9576e7"} Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.940522 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" event={"ID":"1b824030-74f0-4482-b022-6c9cc5e52aac","Type":"ContainerStarted","Data":"3413ef3587fe0867e3a639e18e4c890b9dc3f344cfb5110d03681c37f72e0a9d"} Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.966286 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" podStartSLOduration=2.483836672 podStartE2EDuration="2.966267238s" podCreationTimestamp="2026-02-24 15:29:58 +0000 UTC" firstStartedPulling="2026-02-24 15:29:59.961309521 +0000 UTC m=+2461.580368014" lastFinishedPulling="2026-02-24 15:30:00.443740077 +0000 UTC m=+2462.062798580" observedRunningTime="2026-02-24 15:30:00.95532035 +0000 UTC m=+2462.574378843" watchObservedRunningTime="2026-02-24 15:30:00.966267238 +0000 UTC m=+2462.585325731" Feb 24 15:30:00 crc kubenswrapper[4982]: W0224 15:30:00.995192 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc5293e6_7402_4181_89fc_27675b8d43d8.slice/crio-14032466c88f946430a04a41b3dffb53c390c93bf86f79b3075ee0a94a49f88b WatchSource:0}: Error finding container 14032466c88f946430a04a41b3dffb53c390c93bf86f79b3075ee0a94a49f88b: Status 404 returned error can't find the container with id 14032466c88f946430a04a41b3dffb53c390c93bf86f79b3075ee0a94a49f88b Feb 24 15:30:00 crc kubenswrapper[4982]: I0224 15:30:00.997479 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532450-4blkh"] Feb 24 15:30:01 crc kubenswrapper[4982]: I0224 15:30:01.112326 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f"] Feb 24 15:30:01 crc kubenswrapper[4982]: W0224 15:30:01.120523 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66fbf6b0_80fa_45ad_85dd_a4e151f6a6e4.slice/crio-5c88c4bffcea27eef06432791f8fde34733c9df46bbfab7a41509b394ff36b1e WatchSource:0}: Error finding container 5c88c4bffcea27eef06432791f8fde34733c9df46bbfab7a41509b394ff36b1e: Status 404 returned error can't find the container with id 5c88c4bffcea27eef06432791f8fde34733c9df46bbfab7a41509b394ff36b1e Feb 24 15:30:01 crc kubenswrapper[4982]: I0224 15:30:01.968304 4982 generic.go:334] "Generic (PLEG): container finished" podID="66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4" containerID="a286607d622ec39765b7fe4d0832be9cbb36d19cfdae677171ad8097add1c70a" exitCode=0 Feb 24 15:30:01 crc kubenswrapper[4982]: I0224 15:30:01.968996 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" event={"ID":"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4","Type":"ContainerDied","Data":"a286607d622ec39765b7fe4d0832be9cbb36d19cfdae677171ad8097add1c70a"} Feb 24 15:30:01 crc kubenswrapper[4982]: I0224 15:30:01.969074 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" event={"ID":"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4","Type":"ContainerStarted","Data":"5c88c4bffcea27eef06432791f8fde34733c9df46bbfab7a41509b394ff36b1e"} Feb 24 15:30:01 crc kubenswrapper[4982]: I0224 15:30:01.972092 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532450-4blkh" event={"ID":"bc5293e6-7402-4181-89fc-27675b8d43d8","Type":"ContainerStarted","Data":"14032466c88f946430a04a41b3dffb53c390c93bf86f79b3075ee0a94a49f88b"} Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.110526 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-btcwf"] Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.114172 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.131204 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-btcwf"] Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.195517 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbkhf\" (UniqueName: \"kubernetes.io/projected/ab81acba-043d-40c8-ae58-e3cc47565efe-kube-api-access-qbkhf\") pod \"certified-operators-btcwf\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.195941 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-catalog-content\") pod \"certified-operators-btcwf\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.196352 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-utilities\") pod \"certified-operators-btcwf\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.298641 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-utilities\") pod \"certified-operators-btcwf\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.299079 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbkhf\" (UniqueName: \"kubernetes.io/projected/ab81acba-043d-40c8-ae58-e3cc47565efe-kube-api-access-qbkhf\") pod \"certified-operators-btcwf\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.299171 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-utilities\") pod \"certified-operators-btcwf\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.299399 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-catalog-content\") pod \"certified-operators-btcwf\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.299640 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-catalog-content\") pod \"certified-operators-btcwf\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.318911 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbkhf\" (UniqueName: \"kubernetes.io/projected/ab81acba-043d-40c8-ae58-e3cc47565efe-kube-api-access-qbkhf\") pod \"certified-operators-btcwf\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:02 crc kubenswrapper[4982]: I0224 15:30:02.448990 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.008326 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532450-4blkh" event={"ID":"bc5293e6-7402-4181-89fc-27675b8d43d8","Type":"ContainerStarted","Data":"5fafeb85b407f194c277719ce9d55e44aba017076066a63df79e27ad6f327637"} Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.079773 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532450-4blkh" podStartSLOduration=1.558300539 podStartE2EDuration="3.079754004s" podCreationTimestamp="2026-02-24 15:30:00 +0000 UTC" firstStartedPulling="2026-02-24 15:30:00.99816321 +0000 UTC m=+2462.617221703" lastFinishedPulling="2026-02-24 15:30:02.519616675 +0000 UTC m=+2464.138675168" observedRunningTime="2026-02-24 15:30:03.021277176 +0000 UTC m=+2464.640335669" watchObservedRunningTime="2026-02-24 15:30:03.079754004 +0000 UTC m=+2464.698812497" Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.123274 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-btcwf"] Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.672990 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.764207 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-secret-volume\") pod \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.764316 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-config-volume\") pod \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.764510 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8xh9\" (UniqueName: \"kubernetes.io/projected/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-kube-api-access-h8xh9\") pod \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\" (UID: \"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4\") " Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.766707 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-config-volume" (OuterVolumeSpecName: "config-volume") pod "66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4" (UID: "66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.779259 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-kube-api-access-h8xh9" (OuterVolumeSpecName: "kube-api-access-h8xh9") pod "66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4" (UID: "66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4"). InnerVolumeSpecName "kube-api-access-h8xh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.781569 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4" (UID: "66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.867482 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.867533 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8xh9\" (UniqueName: \"kubernetes.io/projected/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-kube-api-access-h8xh9\") on node \"crc\" DevicePath \"\"" Feb 24 15:30:03 crc kubenswrapper[4982]: I0224 15:30:03.867548 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.020598 4982 generic.go:334] "Generic (PLEG): container finished" podID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerID="baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c" exitCode=0 Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.020701 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btcwf" event={"ID":"ab81acba-043d-40c8-ae58-e3cc47565efe","Type":"ContainerDied","Data":"baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c"} Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.021082 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btcwf" event={"ID":"ab81acba-043d-40c8-ae58-e3cc47565efe","Type":"ContainerStarted","Data":"0edc0ec7109372c4eba3075e231dc66ea0e34a1ec24a95046e55f5dd06cc5bf8"} Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.030910 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.031893 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f" event={"ID":"66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4","Type":"ContainerDied","Data":"5c88c4bffcea27eef06432791f8fde34733c9df46bbfab7a41509b394ff36b1e"} Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.031954 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c88c4bffcea27eef06432791f8fde34733c9df46bbfab7a41509b394ff36b1e" Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.036398 4982 generic.go:334] "Generic (PLEG): container finished" podID="bc5293e6-7402-4181-89fc-27675b8d43d8" containerID="5fafeb85b407f194c277719ce9d55e44aba017076066a63df79e27ad6f327637" exitCode=0 Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.036450 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532450-4blkh" event={"ID":"bc5293e6-7402-4181-89fc-27675b8d43d8","Type":"ContainerDied","Data":"5fafeb85b407f194c277719ce9d55e44aba017076066a63df79e27ad6f327637"} Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.755249 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp"] Feb 24 15:30:04 crc kubenswrapper[4982]: I0224 15:30:04.769734 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532405-xmtjp"] Feb 24 15:30:05 crc kubenswrapper[4982]: I0224 15:30:05.173890 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e396c6a-3142-4ff9-ad15-ed0f7bbe1080" path="/var/lib/kubelet/pods/2e396c6a-3142-4ff9-ad15-ed0f7bbe1080/volumes" Feb 24 15:30:05 crc kubenswrapper[4982]: I0224 15:30:05.518971 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532450-4blkh" Feb 24 15:30:05 crc kubenswrapper[4982]: I0224 15:30:05.529563 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbpzq\" (UniqueName: \"kubernetes.io/projected/bc5293e6-7402-4181-89fc-27675b8d43d8-kube-api-access-qbpzq\") pod \"bc5293e6-7402-4181-89fc-27675b8d43d8\" (UID: \"bc5293e6-7402-4181-89fc-27675b8d43d8\") " Feb 24 15:30:05 crc kubenswrapper[4982]: I0224 15:30:05.543947 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5293e6-7402-4181-89fc-27675b8d43d8-kube-api-access-qbpzq" (OuterVolumeSpecName: "kube-api-access-qbpzq") pod "bc5293e6-7402-4181-89fc-27675b8d43d8" (UID: "bc5293e6-7402-4181-89fc-27675b8d43d8"). InnerVolumeSpecName "kube-api-access-qbpzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:30:05 crc kubenswrapper[4982]: I0224 15:30:05.633888 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbpzq\" (UniqueName: \"kubernetes.io/projected/bc5293e6-7402-4181-89fc-27675b8d43d8-kube-api-access-qbpzq\") on node \"crc\" DevicePath \"\"" Feb 24 15:30:06 crc kubenswrapper[4982]: I0224 15:30:06.064691 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532450-4blkh" event={"ID":"bc5293e6-7402-4181-89fc-27675b8d43d8","Type":"ContainerDied","Data":"14032466c88f946430a04a41b3dffb53c390c93bf86f79b3075ee0a94a49f88b"} Feb 24 15:30:06 crc kubenswrapper[4982]: I0224 15:30:06.064741 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532450-4blkh" Feb 24 15:30:06 crc kubenswrapper[4982]: I0224 15:30:06.064744 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14032466c88f946430a04a41b3dffb53c390c93bf86f79b3075ee0a94a49f88b" Feb 24 15:30:06 crc kubenswrapper[4982]: I0224 15:30:06.100750 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532444-szw64"] Feb 24 15:30:06 crc kubenswrapper[4982]: I0224 15:30:06.118363 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532444-szw64"] Feb 24 15:30:07 crc kubenswrapper[4982]: I0224 15:30:07.078943 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btcwf" event={"ID":"ab81acba-043d-40c8-ae58-e3cc47565efe","Type":"ContainerStarted","Data":"ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd"} Feb 24 15:30:07 crc kubenswrapper[4982]: I0224 15:30:07.163601 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d9e5d0-53d6-456c-a8df-4ca258da91d4" path="/var/lib/kubelet/pods/96d9e5d0-53d6-456c-a8df-4ca258da91d4/volumes" Feb 24 15:30:08 crc kubenswrapper[4982]: I0224 15:30:08.741366 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:30:08 crc kubenswrapper[4982]: I0224 15:30:08.741426 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:30:08 crc kubenswrapper[4982]: I0224 15:30:08.741476 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:30:08 crc kubenswrapper[4982]: I0224 15:30:08.742533 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:30:08 crc kubenswrapper[4982]: I0224 15:30:08.742585 4982 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" gracePeriod=600 Feb 24 15:30:09 crc kubenswrapper[4982]: E0224 15:30:09.978944 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:30:10 crc kubenswrapper[4982]: I0224 15:30:10.127856 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" exitCode=0 Feb 24 15:30:10 crc kubenswrapper[4982]: I0224 15:30:10.127935 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c"} Feb 24 15:30:10 crc kubenswrapper[4982]: I0224 15:30:10.128318 4982 scope.go:117] "RemoveContainer" containerID="ec6240805dd5fe25be3a90815c44ef652ece8f5ebedbe9f3e922c92054ae3159" Feb 24 15:30:10 crc kubenswrapper[4982]: I0224 15:30:10.129279 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:30:10 crc kubenswrapper[4982]: E0224 15:30:10.129676 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:30:12 crc kubenswrapper[4982]: I0224 15:30:12.176043 4982 generic.go:334] "Generic (PLEG): container finished" podID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerID="ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd" exitCode=0 Feb 24 15:30:12 crc kubenswrapper[4982]: I0224 15:30:12.176057 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btcwf" event={"ID":"ab81acba-043d-40c8-ae58-e3cc47565efe","Type":"ContainerDied","Data":"ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd"} Feb 24 15:30:13 crc kubenswrapper[4982]: I0224 15:30:13.190797 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btcwf" event={"ID":"ab81acba-043d-40c8-ae58-e3cc47565efe","Type":"ContainerStarted","Data":"3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429"} Feb 24 15:30:13 crc kubenswrapper[4982]: I0224 15:30:13.218389 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-btcwf" podStartSLOduration=2.672542567 podStartE2EDuration="11.218368265s" podCreationTimestamp="2026-02-24 15:30:02 +0000 UTC" firstStartedPulling="2026-02-24 15:30:04.029835823 +0000 UTC m=+2465.648894316" lastFinishedPulling="2026-02-24 15:30:12.575661511 +0000 UTC 
m=+2474.194720014" observedRunningTime="2026-02-24 15:30:13.208793333 +0000 UTC m=+2474.827851836" watchObservedRunningTime="2026-02-24 15:30:13.218368265 +0000 UTC m=+2474.837426758" Feb 24 15:30:22 crc kubenswrapper[4982]: I0224 15:30:22.450058 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:22 crc kubenswrapper[4982]: I0224 15:30:22.450702 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:22 crc kubenswrapper[4982]: I0224 15:30:22.516360 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:23 crc kubenswrapper[4982]: I0224 15:30:23.353802 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:23 crc kubenswrapper[4982]: I0224 15:30:23.420584 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-btcwf"] Feb 24 15:30:25 crc kubenswrapper[4982]: I0224 15:30:25.146207 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:30:25 crc kubenswrapper[4982]: E0224 15:30:25.147006 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:30:25 crc kubenswrapper[4982]: I0224 15:30:25.337860 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-btcwf" podUID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerName="registry-server" containerID="cri-o://3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429" gracePeriod=2 Feb 24 15:30:25 crc kubenswrapper[4982]: I0224 15:30:25.863999 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:25 crc kubenswrapper[4982]: I0224 15:30:25.976575 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-utilities\") pod \"ab81acba-043d-40c8-ae58-e3cc47565efe\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " Feb 24 15:30:25 crc kubenswrapper[4982]: I0224 15:30:25.977777 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-utilities" (OuterVolumeSpecName: "utilities") pod "ab81acba-043d-40c8-ae58-e3cc47565efe" (UID: "ab81acba-043d-40c8-ae58-e3cc47565efe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:30:25 crc kubenswrapper[4982]: I0224 15:30:25.978112 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-catalog-content\") pod \"ab81acba-043d-40c8-ae58-e3cc47565efe\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " Feb 24 15:30:25 crc kubenswrapper[4982]: I0224 15:30:25.978172 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbkhf\" (UniqueName: \"kubernetes.io/projected/ab81acba-043d-40c8-ae58-e3cc47565efe-kube-api-access-qbkhf\") pod \"ab81acba-043d-40c8-ae58-e3cc47565efe\" (UID: \"ab81acba-043d-40c8-ae58-e3cc47565efe\") " Feb 24 15:30:25 crc kubenswrapper[4982]: I0224 15:30:25.979830 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:30:25 crc kubenswrapper[4982]: I0224 15:30:25.984721 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab81acba-043d-40c8-ae58-e3cc47565efe-kube-api-access-qbkhf" (OuterVolumeSpecName: "kube-api-access-qbkhf") pod "ab81acba-043d-40c8-ae58-e3cc47565efe" (UID: "ab81acba-043d-40c8-ae58-e3cc47565efe"). InnerVolumeSpecName "kube-api-access-qbkhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.029468 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab81acba-043d-40c8-ae58-e3cc47565efe" (UID: "ab81acba-043d-40c8-ae58-e3cc47565efe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.082698 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbkhf\" (UniqueName: \"kubernetes.io/projected/ab81acba-043d-40c8-ae58-e3cc47565efe-kube-api-access-qbkhf\") on node \"crc\" DevicePath \"\"" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.083024 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab81acba-043d-40c8-ae58-e3cc47565efe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.355489 4982 generic.go:334] "Generic (PLEG): container finished" podID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerID="3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429" exitCode=0 Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.355792 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-btcwf" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.355836 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btcwf" event={"ID":"ab81acba-043d-40c8-ae58-e3cc47565efe","Type":"ContainerDied","Data":"3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429"} Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.356442 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btcwf" event={"ID":"ab81acba-043d-40c8-ae58-e3cc47565efe","Type":"ContainerDied","Data":"0edc0ec7109372c4eba3075e231dc66ea0e34a1ec24a95046e55f5dd06cc5bf8"} Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.356490 4982 scope.go:117] "RemoveContainer" containerID="3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.382291 4982 scope.go:117] "RemoveContainer" containerID="ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.441967 4982 scope.go:117] "RemoveContainer" containerID="baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.449534 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-btcwf"] Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.478411 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-btcwf"] Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.494066 4982 scope.go:117] "RemoveContainer" containerID="3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429" Feb 24 15:30:26 crc kubenswrapper[4982]: E0224 15:30:26.495777 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429\": container with ID starting with 3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429 not found: ID does not exist" containerID="3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.495877 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429"} err="failed to get container status \"3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429\": rpc error: code = NotFound desc = could not find container \"3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429\": container with ID starting with 3bc67c71879cc64042508490bd275ca5fb9eac4287d76aeae75a05f7681f8429 not found: ID does not exist" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.495931 4982 scope.go:117] "RemoveContainer" containerID="ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd" Feb 24 15:30:26 crc kubenswrapper[4982]: E0224 15:30:26.496473 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd\": container with ID starting with ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd not found: ID does not exist" containerID="ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.496529 4982 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd"} err="failed to get container status \"ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd\": rpc error: code = NotFound desc = could not find container \"ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd\": container with ID starting with ef1edafc24d751a36ce30df0736a015a273e2b0b5e525019a04a5b0cdde34ebd not found: ID does not exist" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.496562 4982 scope.go:117] "RemoveContainer" containerID="baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c" Feb 24 15:30:26 crc kubenswrapper[4982]: E0224 15:30:26.496831 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c\": container with ID starting with baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c not found: ID does not exist" containerID="baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c" Feb 24 15:30:26 crc kubenswrapper[4982]: I0224 15:30:26.496847 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c"} err="failed to get container status \"baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c\": rpc error: code = NotFound desc = could not find container \"baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c\": container with ID starting with baae4ad5086fe888ec71d87af7b277cdfb9b81029ca0da2dbd10c7e3a32ea06c not found: ID does not exist" Feb 24 15:30:27 crc kubenswrapper[4982]: I0224 15:30:27.162845 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab81acba-043d-40c8-ae58-e3cc47565efe" path="/var/lib/kubelet/pods/ab81acba-043d-40c8-ae58-e3cc47565efe/volumes" Feb 24 15:30:38 crc kubenswrapper[4982]: I0224 15:30:38.147042 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:30:38 crc kubenswrapper[4982]: E0224 15:30:38.148057 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.803186 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mn8mw"] Feb 24 15:30:45 crc kubenswrapper[4982]: E0224 15:30:45.804336 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerName="registry-server" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.804351 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerName="registry-server" Feb 24 15:30:45 crc kubenswrapper[4982]: E0224 15:30:45.804392 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4" containerName="collect-profiles" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.804400 4982 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4" containerName="collect-profiles" Feb 24 15:30:45 crc kubenswrapper[4982]: E0224 15:30:45.804429 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5293e6-7402-4181-89fc-27675b8d43d8" containerName="oc" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.804438 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5293e6-7402-4181-89fc-27675b8d43d8" containerName="oc" Feb 24 15:30:45 crc kubenswrapper[4982]: E0224 15:30:45.804450 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerName="extract-utilities" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.804458 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerName="extract-utilities" Feb 24 15:30:45 crc kubenswrapper[4982]: E0224 15:30:45.804492 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerName="extract-content" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.804610 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerName="extract-content" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.804954 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc5293e6-7402-4181-89fc-27675b8d43d8" containerName="oc" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.805000 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4" containerName="collect-profiles" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.805014 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab81acba-043d-40c8-ae58-e3cc47565efe" containerName="registry-server" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.807929 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.820638 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mn8mw"] Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.917660 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-catalog-content\") pod \"redhat-operators-mn8mw\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.917906 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mctbk\" (UniqueName: \"kubernetes.io/projected/62d53640-fdb5-4c8e-aada-050f8ea1d802-kube-api-access-mctbk\") pod \"redhat-operators-mn8mw\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:45 crc kubenswrapper[4982]: I0224 15:30:45.918146 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-utilities\") pod \"redhat-operators-mn8mw\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:46 crc kubenswrapper[4982]: I0224 15:30:46.020575 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-catalog-content\") pod \"redhat-operators-mn8mw\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:46 crc kubenswrapper[4982]: I0224 15:30:46.020656 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mctbk\" (UniqueName: \"kubernetes.io/projected/62d53640-fdb5-4c8e-aada-050f8ea1d802-kube-api-access-mctbk\") pod \"redhat-operators-mn8mw\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:46 crc kubenswrapper[4982]: I0224 15:30:46.020724 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-utilities\") pod \"redhat-operators-mn8mw\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:46 crc kubenswrapper[4982]: I0224 15:30:46.021142 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-catalog-content\") pod \"redhat-operators-mn8mw\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:46 crc kubenswrapper[4982]: I0224 15:30:46.021171 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-utilities\") pod \"redhat-operators-mn8mw\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:46 crc kubenswrapper[4982]: I0224 15:30:46.046101 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mctbk\" (UniqueName: \"kubernetes.io/projected/62d53640-fdb5-4c8e-aada-050f8ea1d802-kube-api-access-mctbk\") pod \"redhat-operators-mn8mw\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:46 crc kubenswrapper[4982]: I0224 15:30:46.136845 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:46 crc kubenswrapper[4982]: I0224 15:30:46.634712 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mn8mw"] Feb 24 15:30:47 crc kubenswrapper[4982]: I0224 15:30:47.611545 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8mw" event={"ID":"62d53640-fdb5-4c8e-aada-050f8ea1d802","Type":"ContainerDied","Data":"97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1"} Feb 24 15:30:47 crc kubenswrapper[4982]: I0224 15:30:47.611494 4982 generic.go:334] "Generic (PLEG): container finished" podID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerID="97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1" exitCode=0 Feb 24 15:30:47 crc kubenswrapper[4982]: I0224 15:30:47.612005 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8mw" event={"ID":"62d53640-fdb5-4c8e-aada-050f8ea1d802","Type":"ContainerStarted","Data":"21ccb291cee1bb2876f3c9782ec00f9c0650f60ff86beb28339fb246a9520999"} Feb 24 15:30:47 crc kubenswrapper[4982]: I0224 15:30:47.727905 4982 scope.go:117] "RemoveContainer" containerID="7ceb18a9bf81e8ba0c93dc02318c0f9940411705363a232a5042fc3392ba4e39" Feb 24 15:30:47 crc kubenswrapper[4982]: I0224 15:30:47.785312 4982 scope.go:117] "RemoveContainer" containerID="ec5b8a65e094ab700ff24c9c4b7c02590ca4b6396b7013c10ca299a9251bc21e" Feb 24 15:30:49 crc kubenswrapper[4982]: I0224 15:30:49.640536 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8mw" event={"ID":"62d53640-fdb5-4c8e-aada-050f8ea1d802","Type":"ContainerStarted","Data":"82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072"} Feb 24 15:30:51 crc kubenswrapper[4982]: I0224 15:30:51.146031 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:30:51 crc kubenswrapper[4982]: E0224 15:30:51.146655 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:30:53 crc kubenswrapper[4982]: I0224 15:30:53.701662 4982 generic.go:334] "Generic (PLEG): container finished" podID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerID="82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072" exitCode=0 Feb 24 15:30:53 crc kubenswrapper[4982]: I0224 15:30:53.701762 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8mw" event={"ID":"62d53640-fdb5-4c8e-aada-050f8ea1d802","Type":"ContainerDied","Data":"82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072"} Feb 24 15:30:54 crc kubenswrapper[4982]: I0224 15:30:54.721365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-mn8mw" event={"ID":"62d53640-fdb5-4c8e-aada-050f8ea1d802","Type":"ContainerStarted","Data":"d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd"} Feb 24 15:30:54 crc kubenswrapper[4982]: I0224 15:30:54.761488 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mn8mw" podStartSLOduration=3.28404364 podStartE2EDuration="9.761463354s" podCreationTimestamp="2026-02-24 15:30:45 +0000 UTC" firstStartedPulling="2026-02-24 15:30:47.614980866 +0000 UTC m=+2509.234039399" lastFinishedPulling="2026-02-24 15:30:54.09240062 +0000 UTC m=+2515.711459113" observedRunningTime="2026-02-24 15:30:54.743279988 +0000 UTC m=+2516.362338501" watchObservedRunningTime="2026-02-24 15:30:54.761463354 +0000 UTC m=+2516.380521867" Feb 24 15:30:56 crc kubenswrapper[4982]: I0224 15:30:56.136924 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:56 crc kubenswrapper[4982]: I0224 15:30:56.137246 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:30:57 crc kubenswrapper[4982]: I0224 15:30:57.187675 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mn8mw" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="registry-server" probeResult="failure" output=< Feb 24 15:30:57 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:30:57 crc kubenswrapper[4982]: > Feb 24 15:31:05 crc kubenswrapper[4982]: I0224 15:31:05.145866 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:31:05 crc kubenswrapper[4982]: E0224 15:31:05.146855 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:31:07 crc kubenswrapper[4982]: I0224 15:31:07.190234 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mn8mw" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="registry-server" probeResult="failure" output=< Feb 24 15:31:07 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:31:07 crc kubenswrapper[4982]: > Feb 24 15:31:16 crc kubenswrapper[4982]: I0224 15:31:16.195339 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:31:16 crc kubenswrapper[4982]: I0224 15:31:16.260088 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:31:16 crc kubenswrapper[4982]: I0224 15:31:16.999006 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mn8mw"] Feb 24 15:31:17 crc kubenswrapper[4982]: I0224 15:31:17.145612 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:31:17 crc kubenswrapper[4982]: E0224 15:31:17.146099 4982 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.015104 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mn8mw" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="registry-server" containerID="cri-o://d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd" gracePeriod=2 Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.553008 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.661857 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-utilities\") pod \"62d53640-fdb5-4c8e-aada-050f8ea1d802\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.662017 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mctbk\" (UniqueName: \"kubernetes.io/projected/62d53640-fdb5-4c8e-aada-050f8ea1d802-kube-api-access-mctbk\") pod \"62d53640-fdb5-4c8e-aada-050f8ea1d802\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.662192 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-catalog-content\") pod \"62d53640-fdb5-4c8e-aada-050f8ea1d802\" (UID: \"62d53640-fdb5-4c8e-aada-050f8ea1d802\") " Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.662752 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-utilities" (OuterVolumeSpecName: "utilities") pod "62d53640-fdb5-4c8e-aada-050f8ea1d802" (UID: "62d53640-fdb5-4c8e-aada-050f8ea1d802"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.663372 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.670056 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d53640-fdb5-4c8e-aada-050f8ea1d802-kube-api-access-mctbk" (OuterVolumeSpecName: "kube-api-access-mctbk") pod "62d53640-fdb5-4c8e-aada-050f8ea1d802" (UID: "62d53640-fdb5-4c8e-aada-050f8ea1d802"). InnerVolumeSpecName "kube-api-access-mctbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.765586 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mctbk\" (UniqueName: \"kubernetes.io/projected/62d53640-fdb5-4c8e-aada-050f8ea1d802-kube-api-access-mctbk\") on node \"crc\" DevicePath \"\"" Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.795158 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62d53640-fdb5-4c8e-aada-050f8ea1d802" (UID: "62d53640-fdb5-4c8e-aada-050f8ea1d802"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:31:18 crc kubenswrapper[4982]: I0224 15:31:18.867785 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62d53640-fdb5-4c8e-aada-050f8ea1d802-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.031266 4982 generic.go:334] "Generic (PLEG): container finished" podID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerID="d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd" exitCode=0 Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.031339 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8mw" event={"ID":"62d53640-fdb5-4c8e-aada-050f8ea1d802","Type":"ContainerDied","Data":"d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd"} Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.031345 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn8mw" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.031365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn8mw" event={"ID":"62d53640-fdb5-4c8e-aada-050f8ea1d802","Type":"ContainerDied","Data":"21ccb291cee1bb2876f3c9782ec00f9c0650f60ff86beb28339fb246a9520999"} Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.031382 4982 scope.go:117] "RemoveContainer" containerID="d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.074834 4982 scope.go:117] "RemoveContainer" containerID="82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.109855 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mn8mw"] Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.137289 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mn8mw"] Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.139178 4982 scope.go:117] "RemoveContainer" containerID="97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.247702 4982 scope.go:117] "RemoveContainer" containerID="d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.248827 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" path="/var/lib/kubelet/pods/62d53640-fdb5-4c8e-aada-050f8ea1d802/volumes" Feb 24 15:31:19 crc kubenswrapper[4982]: E0224 15:31:19.284649 4982 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd\": container with ID starting with d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd not found: ID does not exist" containerID="d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.284691 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd"} err="failed to get container status \"d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd\": rpc error: code = NotFound desc = could not find container \"d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd\": container with ID starting with d21c80a9c125237ca4f0108a96b722c1b405b02a5cca387331b1d5324a89addd not found: ID does not exist" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.284715 4982 scope.go:117] "RemoveContainer" containerID="82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072" Feb 24 15:31:19 crc kubenswrapper[4982]: E0224 15:31:19.286317 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072\": container with ID starting with 82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072 not found: ID does not exist" containerID="82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.286340 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072"} err="failed to get container status \"82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072\": rpc error: code = NotFound desc = could not find container \"82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072\": container with ID starting with 82bd7671bb04a1c81e9b51a96c8cd1d7ac99f05f87fb7455eb92302e91b14072 not found: ID does not exist" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.286355 4982 scope.go:117] "RemoveContainer" containerID="97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1" Feb 24 15:31:19 crc kubenswrapper[4982]: E0224 15:31:19.290609 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1\": container with ID starting with 97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1 not found: ID does not exist" containerID="97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1" Feb 24 15:31:19 crc kubenswrapper[4982]: I0224 15:31:19.290874 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1"} err="failed to get container status \"97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1\": rpc error: code = NotFound desc = could not find container \"97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1\": container with ID starting with 97d69bde58993700986285b40677b5c68dd3b634a0d28d12d65bfd47acf057f1 not found: ID does not exist" Feb 24 15:31:32 crc kubenswrapper[4982]: I0224 15:31:32.146098 4982 scope.go:117] "RemoveContainer" 
containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:31:32 crc kubenswrapper[4982]: E0224 15:31:32.147096 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:31:44 crc kubenswrapper[4982]: I0224 15:31:44.146101 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:31:44 crc kubenswrapper[4982]: E0224 15:31:44.147017 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:31:57 crc kubenswrapper[4982]: I0224 15:31:57.146168 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:31:57 crc kubenswrapper[4982]: E0224 15:31:57.146954 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.164368 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532452-4j6xr"] Feb 24 15:32:00 crc kubenswrapper[4982]: E0224 15:32:00.165220 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="extract-content" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.165233 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="extract-content" Feb 24 15:32:00 crc kubenswrapper[4982]: E0224 15:32:00.165268 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="registry-server" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.165274 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="registry-server" Feb 24 15:32:00 crc kubenswrapper[4982]: E0224 15:32:00.165288 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="extract-utilities" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.165294 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="extract-utilities" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.165520 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d53640-fdb5-4c8e-aada-050f8ea1d802" containerName="registry-server" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.166336 4982 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532452-4j6xr" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.168248 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.169046 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.169810 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.187156 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532452-4j6xr"] Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.267577 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5q9c\" (UniqueName: \"kubernetes.io/projected/84f60780-22b1-481b-aa05-c11f94ce37bb-kube-api-access-s5q9c\") pod \"auto-csr-approver-29532452-4j6xr\" (UID: \"84f60780-22b1-481b-aa05-c11f94ce37bb\") " pod="openshift-infra/auto-csr-approver-29532452-4j6xr" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.370955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5q9c\" (UniqueName: \"kubernetes.io/projected/84f60780-22b1-481b-aa05-c11f94ce37bb-kube-api-access-s5q9c\") pod \"auto-csr-approver-29532452-4j6xr\" (UID: \"84f60780-22b1-481b-aa05-c11f94ce37bb\") " pod="openshift-infra/auto-csr-approver-29532452-4j6xr" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.400376 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5q9c\" (UniqueName: \"kubernetes.io/projected/84f60780-22b1-481b-aa05-c11f94ce37bb-kube-api-access-s5q9c\") pod \"auto-csr-approver-29532452-4j6xr\" (UID: \"84f60780-22b1-481b-aa05-c11f94ce37bb\") " pod="openshift-infra/auto-csr-approver-29532452-4j6xr" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.505112 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532452-4j6xr" Feb 24 15:32:00 crc kubenswrapper[4982]: I0224 15:32:00.998175 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532452-4j6xr"] Feb 24 15:32:01 crc kubenswrapper[4982]: I0224 15:32:01.613832 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532452-4j6xr" event={"ID":"84f60780-22b1-481b-aa05-c11f94ce37bb","Type":"ContainerStarted","Data":"961753f640b22b69adb87ffde4203bbc8465e0094b157d59774ed1b033cda793"} Feb 24 15:32:02 crc kubenswrapper[4982]: I0224 15:32:02.631943 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532452-4j6xr" event={"ID":"84f60780-22b1-481b-aa05-c11f94ce37bb","Type":"ContainerStarted","Data":"bda0dd437990578da7d8486043df11316b410327db23e220ceabf226e5c2d9f0"} Feb 24 15:32:02 crc kubenswrapper[4982]: I0224 15:32:02.675671 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532452-4j6xr" podStartSLOduration=1.684043296 podStartE2EDuration="2.675650206s" podCreationTimestamp="2026-02-24 15:32:00 +0000 UTC" firstStartedPulling="2026-02-24 15:32:01.005060736 +0000 UTC m=+2582.624119229" lastFinishedPulling="2026-02-24 15:32:01.996667606 +0000 UTC m=+2583.615726139" observedRunningTime="2026-02-24 15:32:02.656912426 +0000 UTC m=+2584.275970939" watchObservedRunningTime="2026-02-24 15:32:02.675650206 +0000 UTC m=+2584.294708719" Feb 24 15:32:03 crc kubenswrapper[4982]: I0224 15:32:03.655666 4982 generic.go:334] "Generic (PLEG): container finished" podID="84f60780-22b1-481b-aa05-c11f94ce37bb" containerID="bda0dd437990578da7d8486043df11316b410327db23e220ceabf226e5c2d9f0" exitCode=0 Feb 24 15:32:03 crc kubenswrapper[4982]: I0224 15:32:03.655756 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532452-4j6xr" event={"ID":"84f60780-22b1-481b-aa05-c11f94ce37bb","Type":"ContainerDied","Data":"bda0dd437990578da7d8486043df11316b410327db23e220ceabf226e5c2d9f0"} Feb 24 15:32:05 crc kubenswrapper[4982]: I0224 15:32:05.111971 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532452-4j6xr" Feb 24 15:32:05 crc kubenswrapper[4982]: I0224 15:32:05.206627 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5q9c\" (UniqueName: \"kubernetes.io/projected/84f60780-22b1-481b-aa05-c11f94ce37bb-kube-api-access-s5q9c\") pod \"84f60780-22b1-481b-aa05-c11f94ce37bb\" (UID: \"84f60780-22b1-481b-aa05-c11f94ce37bb\") " Feb 24 15:32:05 crc kubenswrapper[4982]: I0224 15:32:05.212832 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f60780-22b1-481b-aa05-c11f94ce37bb-kube-api-access-s5q9c" (OuterVolumeSpecName: "kube-api-access-s5q9c") pod "84f60780-22b1-481b-aa05-c11f94ce37bb" (UID: "84f60780-22b1-481b-aa05-c11f94ce37bb"). InnerVolumeSpecName "kube-api-access-s5q9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:32:05 crc kubenswrapper[4982]: I0224 15:32:05.310008 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5q9c\" (UniqueName: \"kubernetes.io/projected/84f60780-22b1-481b-aa05-c11f94ce37bb-kube-api-access-s5q9c\") on node \"crc\" DevicePath \"\"" Feb 24 15:32:05 crc kubenswrapper[4982]: I0224 15:32:05.682764 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532452-4j6xr" event={"ID":"84f60780-22b1-481b-aa05-c11f94ce37bb","Type":"ContainerDied","Data":"961753f640b22b69adb87ffde4203bbc8465e0094b157d59774ed1b033cda793"} Feb 24 15:32:05 crc kubenswrapper[4982]: I0224 15:32:05.682821 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="961753f640b22b69adb87ffde4203bbc8465e0094b157d59774ed1b033cda793" Feb 24 15:32:05 crc kubenswrapper[4982]: I0224 15:32:05.682902 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532452-4j6xr" Feb 24 15:32:05 crc kubenswrapper[4982]: I0224 15:32:05.729348 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532446-qjtj2"] Feb 24 15:32:05 crc kubenswrapper[4982]: I0224 15:32:05.741528 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532446-qjtj2"] Feb 24 15:32:07 crc kubenswrapper[4982]: I0224 15:32:07.164240 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9795c7-e101-4593-a414-c64dc4b42f83" path="/var/lib/kubelet/pods/4d9795c7-e101-4593-a414-c64dc4b42f83/volumes" Feb 24 15:32:11 crc kubenswrapper[4982]: I0224 15:32:11.146057 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:32:11 crc kubenswrapper[4982]: E0224 15:32:11.146919 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:32:23 crc kubenswrapper[4982]: I0224 15:32:23.146074 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:32:23 crc kubenswrapper[4982]: E0224 15:32:23.147164 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:32:38 crc kubenswrapper[4982]: I0224 15:32:38.146997 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:32:38 crc kubenswrapper[4982]: E0224 15:32:38.148455 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:32:47 crc kubenswrapper[4982]: I0224 15:32:47.962890 4982 scope.go:117] "RemoveContainer" containerID="5a7e79db896a013ac9438305259c410c7083af313c23228d517fef4ba91fa589" Feb 24 15:32:51 crc kubenswrapper[4982]: I0224 15:32:51.145417 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:32:51 crc kubenswrapper[4982]: E0224 15:32:51.146365 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:33:06 crc kubenswrapper[4982]: I0224 15:33:06.146139 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:33:06 crc kubenswrapper[4982]: E0224 15:33:06.147120 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:33:20 crc kubenswrapper[4982]: I0224 15:33:20.145540 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:33:20 crc kubenswrapper[4982]: E0224 15:33:20.146519 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:33:34 crc kubenswrapper[4982]: I0224 15:33:34.146749 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:33:34 crc kubenswrapper[4982]: E0224 15:33:34.148553 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:33:47 crc kubenswrapper[4982]: I0224 15:33:47.146948 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:33:47 crc kubenswrapper[4982]: E0224 15:33:47.148112 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:33:59 crc kubenswrapper[4982]: I0224 15:33:59.164430 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:33:59 crc kubenswrapper[4982]: E0224 15:33:59.165300 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:33:59 crc kubenswrapper[4982]: I0224 15:33:59.239727 4982 generic.go:334] "Generic (PLEG): container finished" podID="1b824030-74f0-4482-b022-6c9cc5e52aac" containerID="4a455f1e218b9f50eee29bc8fa0ec2ab9047f2572ce0f4d662035fa75e9576e7" exitCode=0 Feb 24 15:33:59 crc kubenswrapper[4982]: I0224 15:33:59.239772 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" event={"ID":"1b824030-74f0-4482-b022-6c9cc5e52aac","Type":"ContainerDied","Data":"4a455f1e218b9f50eee29bc8fa0ec2ab9047f2572ce0f4d662035fa75e9576e7"} Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.179360 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532454-28dfk"] Feb 24 15:34:00 crc kubenswrapper[4982]: E0224 15:34:00.180011 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f60780-22b1-481b-aa05-c11f94ce37bb" containerName="oc" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.180027 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f60780-22b1-481b-aa05-c11f94ce37bb" containerName="oc" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.180359 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f60780-22b1-481b-aa05-c11f94ce37bb" containerName="oc" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.181372 4982 util.go:30] "No sandbox for pod can be found. 
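
Annotation: the machine-config-daemon entries repeating through this window show CrashLoopBackOff at its cap: the pod worker resyncs every 12–15 seconds here, and each attempt is refused with "back-off 5m0s" until the backoff window expires. The kubelet's restart backoff doubles per failure up to a cap; a sketch of that shape, where the 10s initial delay and 5m cap are the kubelet defaults, assumed rather than shown in this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, limit := 10*time.Second, 5*time.Minute
	for i := 0; i < 8; i++ {
		fmt.Println(delay) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s
		if delay *= 2; delay > limit {
			delay = limit
		}
	}
}
```
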
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532454-28dfk" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.185314 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.185646 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.185913 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.217686 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532454-28dfk"] Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.220253 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r98gg\" (UniqueName: \"kubernetes.io/projected/4189936c-4226-419e-b33c-ea0500c5cb45-kube-api-access-r98gg\") pod \"auto-csr-approver-29532454-28dfk\" (UID: \"4189936c-4226-419e-b33c-ea0500c5cb45\") " pod="openshift-infra/auto-csr-approver-29532454-28dfk" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.322956 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r98gg\" (UniqueName: \"kubernetes.io/projected/4189936c-4226-419e-b33c-ea0500c5cb45-kube-api-access-r98gg\") pod \"auto-csr-approver-29532454-28dfk\" (UID: \"4189936c-4226-419e-b33c-ea0500c5cb45\") " pod="openshift-infra/auto-csr-approver-29532454-28dfk" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.343429 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r98gg\" (UniqueName: \"kubernetes.io/projected/4189936c-4226-419e-b33c-ea0500c5cb45-kube-api-access-r98gg\") pod \"auto-csr-approver-29532454-28dfk\" (UID: \"4189936c-4226-419e-b33c-ea0500c5cb45\") " pod="openshift-infra/auto-csr-approver-29532454-28dfk" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.515660 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532454-28dfk" Feb 24 15:34:00 crc kubenswrapper[4982]: I0224 15:34:00.921596 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.037756 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-inventory\") pod \"1b824030-74f0-4482-b022-6c9cc5e52aac\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.037963 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-combined-ca-bundle\") pod \"1b824030-74f0-4482-b022-6c9cc5e52aac\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.038609 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4k2g\" (UniqueName: \"kubernetes.io/projected/1b824030-74f0-4482-b022-6c9cc5e52aac-kube-api-access-h4k2g\") pod \"1b824030-74f0-4482-b022-6c9cc5e52aac\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.038656 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-ssh-key-openstack-edpm-ipam\") pod \"1b824030-74f0-4482-b022-6c9cc5e52aac\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.038692 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-secret-0\") pod \"1b824030-74f0-4482-b022-6c9cc5e52aac\" (UID: \"1b824030-74f0-4482-b022-6c9cc5e52aac\") " Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.044103 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b824030-74f0-4482-b022-6c9cc5e52aac-kube-api-access-h4k2g" (OuterVolumeSpecName: "kube-api-access-h4k2g") pod "1b824030-74f0-4482-b022-6c9cc5e52aac" (UID: "1b824030-74f0-4482-b022-6c9cc5e52aac"). InnerVolumeSpecName "kube-api-access-h4k2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.044649 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1b824030-74f0-4482-b022-6c9cc5e52aac" (UID: "1b824030-74f0-4482-b022-6c9cc5e52aac"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.084058 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-inventory" (OuterVolumeSpecName: "inventory") pod "1b824030-74f0-4482-b022-6c9cc5e52aac" (UID: "1b824030-74f0-4482-b022-6c9cc5e52aac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.085039 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1b824030-74f0-4482-b022-6c9cc5e52aac" (UID: "1b824030-74f0-4482-b022-6c9cc5e52aac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.095125 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532454-28dfk"] Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.114265 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1b824030-74f0-4482-b022-6c9cc5e52aac" (UID: "1b824030-74f0-4482-b022-6c9cc5e52aac"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.141642 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4k2g\" (UniqueName: \"kubernetes.io/projected/1b824030-74f0-4482-b022-6c9cc5e52aac-kube-api-access-h4k2g\") on node \"crc\" DevicePath \"\"" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.141667 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.141677 4982 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.141686 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.141695 4982 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b824030-74f0-4482-b022-6c9cc5e52aac-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.264431 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532454-28dfk" event={"ID":"4189936c-4226-419e-b33c-ea0500c5cb45","Type":"ContainerStarted","Data":"3f0b7dc71c6dae45a0cf0f5a8159a7fb27ed31154638589f9bd610dd10c6a1b3"} Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.268090 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" event={"ID":"1b824030-74f0-4482-b022-6c9cc5e52aac","Type":"ContainerDied","Data":"3413ef3587fe0867e3a639e18e4c890b9dc3f344cfb5110d03681c37f72e0a9d"} Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.268114 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3413ef3587fe0867e3a639e18e4c890b9dc3f344cfb5110d03681c37f72e0a9d" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.268183 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.381164 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt"] Feb 24 15:34:01 crc kubenswrapper[4982]: E0224 15:34:01.381859 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b824030-74f0-4482-b022-6c9cc5e52aac" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.381879 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b824030-74f0-4482-b022-6c9cc5e52aac" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.382131 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b824030-74f0-4482-b022-6c9cc5e52aac" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.383158 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.386995 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.387253 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.387365 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.387457 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.387557 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.387639 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.387786 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.401345 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt"] Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.554120 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.554168 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.554213 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.554251 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.555240 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.555380 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.555748 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.555829 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.556149 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.556196 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.556226 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz59s\" (UniqueName: \"kubernetes.io/projected/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-kube-api-access-bz59s\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.658990 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659045 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659089 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659128 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659153 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659185 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659260 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659289 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659348 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659365 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz59s\" (UniqueName: \"kubernetes.io/projected/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-kube-api-access-bz59s\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.659382 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.660218 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.665374 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.666812 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.666957 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 
15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.667605 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.667774 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.667976 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.668281 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.668309 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.669113 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.679917 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz59s\" (UniqueName: \"kubernetes.io/projected/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-kube-api-access-bz59s\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qjzqt\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:01 crc kubenswrapper[4982]: I0224 15:34:01.738195 4982 util.go:30] "No sandbox for pod can be found. 
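
Annotation: the "Generic (PLEG): container finished" and paired "SyncLoop (PLEG): event for pod" entries around here come from the pod lifecycle event generator, which relists container runtime state and turns per-container state changes into events for the sync loop. A toy diff of two relists in that style, using container-ID prefixes from the log; this shows the pattern only, not the kubelet's actual implementation:

```go
package main

import "fmt"

// diff emits an event for every container whose state changed between relists.
func diff(prev, cur map[string]string) {
	for id, state := range cur {
		if prev[id] != state {
			fmt.Printf("event: %s -> %s\n", id, state)
		}
	}
}

func main() {
	prev := map[string]string{"ed3d43cf": "running"}
	cur := map[string]string{"ed3d43cf": "exited", "e895940c": "running"}
	diff(prev, cur) // ContainerDied for ed3d43cf, ContainerStarted for e895940c
}
```
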
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:34:02 crc kubenswrapper[4982]: I0224 15:34:02.355439 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt"] Feb 24 15:34:03 crc kubenswrapper[4982]: I0224 15:34:03.291343 4982 generic.go:334] "Generic (PLEG): container finished" podID="4189936c-4226-419e-b33c-ea0500c5cb45" containerID="ed3d43cfffa6ba6bfff55034febca814d94b5f3a737ed14712acee947d05e217" exitCode=0 Feb 24 15:34:03 crc kubenswrapper[4982]: I0224 15:34:03.291838 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532454-28dfk" event={"ID":"4189936c-4226-419e-b33c-ea0500c5cb45","Type":"ContainerDied","Data":"ed3d43cfffa6ba6bfff55034febca814d94b5f3a737ed14712acee947d05e217"} Feb 24 15:34:03 crc kubenswrapper[4982]: I0224 15:34:03.293891 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" event={"ID":"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7","Type":"ContainerStarted","Data":"8be43e962d688ee2e449aee51f026181219c0c690a1ebfe0b394d3c4009ecfd4"} Feb 24 15:34:04 crc kubenswrapper[4982]: I0224 15:34:04.306390 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" event={"ID":"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7","Type":"ContainerStarted","Data":"e895940c0edc307dd9fd68106c658c6166e2a980ac8f097fc082736e41efac9d"} Feb 24 15:34:04 crc kubenswrapper[4982]: I0224 15:34:04.342382 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" podStartSLOduration=2.8705987139999998 podStartE2EDuration="3.342363133s" podCreationTimestamp="2026-02-24 15:34:01 +0000 UTC" firstStartedPulling="2026-02-24 15:34:02.396899859 +0000 UTC m=+2704.015958362" lastFinishedPulling="2026-02-24 15:34:02.868664258 +0000 UTC m=+2704.487722781" observedRunningTime="2026-02-24 15:34:04.337467289 +0000 UTC m=+2705.956525792" watchObservedRunningTime="2026-02-24 15:34:04.342363133 +0000 UTC m=+2705.961421616" Feb 24 15:34:04 crc kubenswrapper[4982]: I0224 15:34:04.696291 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532454-28dfk" Feb 24 15:34:04 crc kubenswrapper[4982]: I0224 15:34:04.838852 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r98gg\" (UniqueName: \"kubernetes.io/projected/4189936c-4226-419e-b33c-ea0500c5cb45-kube-api-access-r98gg\") pod \"4189936c-4226-419e-b33c-ea0500c5cb45\" (UID: \"4189936c-4226-419e-b33c-ea0500c5cb45\") " Feb 24 15:34:04 crc kubenswrapper[4982]: I0224 15:34:04.858064 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4189936c-4226-419e-b33c-ea0500c5cb45-kube-api-access-r98gg" (OuterVolumeSpecName: "kube-api-access-r98gg") pod "4189936c-4226-419e-b33c-ea0500c5cb45" (UID: "4189936c-4226-419e-b33c-ea0500c5cb45"). InnerVolumeSpecName "kube-api-access-r98gg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:34:04 crc kubenswrapper[4982]: I0224 15:34:04.943870 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r98gg\" (UniqueName: \"kubernetes.io/projected/4189936c-4226-419e-b33c-ea0500c5cb45-kube-api-access-r98gg\") on node \"crc\" DevicePath \"\"" Feb 24 15:34:05 crc kubenswrapper[4982]: I0224 15:34:05.329624 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532454-28dfk" Feb 24 15:34:05 crc kubenswrapper[4982]: I0224 15:34:05.329685 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532454-28dfk" event={"ID":"4189936c-4226-419e-b33c-ea0500c5cb45","Type":"ContainerDied","Data":"3f0b7dc71c6dae45a0cf0f5a8159a7fb27ed31154638589f9bd610dd10c6a1b3"} Feb 24 15:34:05 crc kubenswrapper[4982]: I0224 15:34:05.330027 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f0b7dc71c6dae45a0cf0f5a8159a7fb27ed31154638589f9bd610dd10c6a1b3" Feb 24 15:34:05 crc kubenswrapper[4982]: I0224 15:34:05.777663 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532448-v4r99"] Feb 24 15:34:05 crc kubenswrapper[4982]: I0224 15:34:05.792393 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532448-v4r99"] Feb 24 15:34:07 crc kubenswrapper[4982]: I0224 15:34:07.158403 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f617b2-6e46-4b3f-9bdf-999597d27e87" path="/var/lib/kubelet/pods/d3f617b2-6e46-4b3f-9bdf-999597d27e87/volumes" Feb 24 15:34:10 crc kubenswrapper[4982]: I0224 15:34:10.147650 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:34:10 crc kubenswrapper[4982]: E0224 15:34:10.148238 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:34:25 crc kubenswrapper[4982]: I0224 15:34:25.146589 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:34:25 crc kubenswrapper[4982]: E0224 15:34:25.148395 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:34:40 crc kubenswrapper[4982]: I0224 15:34:40.145618 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:34:40 crc kubenswrapper[4982]: E0224 15:34:40.146858 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:34:48 crc kubenswrapper[4982]: I0224 15:34:48.115002 4982 scope.go:117] "RemoveContainer" containerID="e28d002b1d0b1a1ba5df2632631eca92fcda177e89ae678e522cf44c713403e9" Feb 24 15:34:51 crc kubenswrapper[4982]: I0224 15:34:51.145747 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:34:51 crc kubenswrapper[4982]: E0224 15:34:51.146397 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:35:04 crc kubenswrapper[4982]: I0224 15:35:04.145458 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:35:04 crc kubenswrapper[4982]: E0224 15:35:04.146290 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:35:15 crc kubenswrapper[4982]: I0224 15:35:15.146964 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:35:16 crc kubenswrapper[4982]: I0224 15:35:16.323165 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"bf6256e0be4e6c2433d64e926d1b0b4f0e8eaa1115b4db80d6492b104ec63749"} Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.169938 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532456-kxhlm"] Feb 24 15:36:00 crc kubenswrapper[4982]: E0224 15:36:00.171434 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4189936c-4226-419e-b33c-ea0500c5cb45" containerName="oc" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.171460 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4189936c-4226-419e-b33c-ea0500c5cb45" containerName="oc" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.172029 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4189936c-4226-419e-b33c-ea0500c5cb45" containerName="oc" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.173392 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532456-kxhlm" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.175413 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.175650 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.176672 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.195673 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532456-kxhlm"] Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.290370 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98klw\" (UniqueName: \"kubernetes.io/projected/4790e210-1fed-44a0-8f36-ac6e40b342b6-kube-api-access-98klw\") pod \"auto-csr-approver-29532456-kxhlm\" (UID: \"4790e210-1fed-44a0-8f36-ac6e40b342b6\") " pod="openshift-infra/auto-csr-approver-29532456-kxhlm" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.392989 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98klw\" (UniqueName: \"kubernetes.io/projected/4790e210-1fed-44a0-8f36-ac6e40b342b6-kube-api-access-98klw\") pod \"auto-csr-approver-29532456-kxhlm\" (UID: \"4790e210-1fed-44a0-8f36-ac6e40b342b6\") " pod="openshift-infra/auto-csr-approver-29532456-kxhlm" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.422220 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98klw\" (UniqueName: \"kubernetes.io/projected/4790e210-1fed-44a0-8f36-ac6e40b342b6-kube-api-access-98klw\") pod \"auto-csr-approver-29532456-kxhlm\" (UID: \"4790e210-1fed-44a0-8f36-ac6e40b342b6\") " pod="openshift-infra/auto-csr-approver-29532456-kxhlm" Feb 24 15:36:00 crc kubenswrapper[4982]: I0224 15:36:00.504794 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532456-kxhlm" Feb 24 15:36:01 crc kubenswrapper[4982]: I0224 15:36:01.141154 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532456-kxhlm"] Feb 24 15:36:01 crc kubenswrapper[4982]: I0224 15:36:01.175017 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 15:36:01 crc kubenswrapper[4982]: I0224 15:36:01.941767 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532456-kxhlm" event={"ID":"4790e210-1fed-44a0-8f36-ac6e40b342b6","Type":"ContainerStarted","Data":"83609c8ee4fa3b35a0772dae39bbb9945ab3c457ea095baf861fecc285dd605b"} Feb 24 15:36:02 crc kubenswrapper[4982]: I0224 15:36:02.956379 4982 generic.go:334] "Generic (PLEG): container finished" podID="4790e210-1fed-44a0-8f36-ac6e40b342b6" containerID="4525a92bac1ef3e30a40592c371737913bf6804e470ee2859d88c53a17162733" exitCode=0 Feb 24 15:36:02 crc kubenswrapper[4982]: I0224 15:36:02.956453 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532456-kxhlm" event={"ID":"4790e210-1fed-44a0-8f36-ac6e40b342b6","Type":"ContainerDied","Data":"4525a92bac1ef3e30a40592c371737913bf6804e470ee2859d88c53a17162733"} Feb 24 15:36:04 crc kubenswrapper[4982]: I0224 15:36:04.482207 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532456-kxhlm" Feb 24 15:36:04 crc kubenswrapper[4982]: I0224 15:36:04.509447 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98klw\" (UniqueName: \"kubernetes.io/projected/4790e210-1fed-44a0-8f36-ac6e40b342b6-kube-api-access-98klw\") pod \"4790e210-1fed-44a0-8f36-ac6e40b342b6\" (UID: \"4790e210-1fed-44a0-8f36-ac6e40b342b6\") " Feb 24 15:36:04 crc kubenswrapper[4982]: I0224 15:36:04.518725 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4790e210-1fed-44a0-8f36-ac6e40b342b6-kube-api-access-98klw" (OuterVolumeSpecName: "kube-api-access-98klw") pod "4790e210-1fed-44a0-8f36-ac6e40b342b6" (UID: "4790e210-1fed-44a0-8f36-ac6e40b342b6"). InnerVolumeSpecName "kube-api-access-98klw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:36:04 crc kubenswrapper[4982]: I0224 15:36:04.612151 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98klw\" (UniqueName: \"kubernetes.io/projected/4790e210-1fed-44a0-8f36-ac6e40b342b6-kube-api-access-98klw\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:04 crc kubenswrapper[4982]: I0224 15:36:04.985357 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532456-kxhlm" event={"ID":"4790e210-1fed-44a0-8f36-ac6e40b342b6","Type":"ContainerDied","Data":"83609c8ee4fa3b35a0772dae39bbb9945ab3c457ea095baf861fecc285dd605b"} Feb 24 15:36:04 crc kubenswrapper[4982]: I0224 15:36:04.985418 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83609c8ee4fa3b35a0772dae39bbb9945ab3c457ea095baf861fecc285dd605b" Feb 24 15:36:04 crc kubenswrapper[4982]: I0224 15:36:04.985430 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532456-kxhlm" Feb 24 15:36:05 crc kubenswrapper[4982]: I0224 15:36:05.575973 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532450-4blkh"] Feb 24 15:36:05 crc kubenswrapper[4982]: I0224 15:36:05.587874 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532450-4blkh"] Feb 24 15:36:07 crc kubenswrapper[4982]: I0224 15:36:07.167386 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5293e6-7402-4181-89fc-27675b8d43d8" path="/var/lib/kubelet/pods/bc5293e6-7402-4181-89fc-27675b8d43d8/volumes" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.617187 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9tmcs"] Feb 24 15:36:35 crc kubenswrapper[4982]: E0224 15:36:35.618685 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4790e210-1fed-44a0-8f36-ac6e40b342b6" containerName="oc" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.618707 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4790e210-1fed-44a0-8f36-ac6e40b342b6" containerName="oc" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.619162 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4790e210-1fed-44a0-8f36-ac6e40b342b6" containerName="oc" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.632229 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9tmcs"] Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.632455 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.816231 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-catalog-content\") pod \"community-operators-9tmcs\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.816561 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-utilities\") pod \"community-operators-9tmcs\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.816602 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgx7\" (UniqueName: \"kubernetes.io/projected/a25bb436-c214-46ac-8371-81d626928cdb-kube-api-access-2sgx7\") pod \"community-operators-9tmcs\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.918199 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-catalog-content\") pod \"community-operators-9tmcs\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.918272 4982 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-utilities\") pod \"community-operators-9tmcs\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.918311 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sgx7\" (UniqueName: \"kubernetes.io/projected/a25bb436-c214-46ac-8371-81d626928cdb-kube-api-access-2sgx7\") pod \"community-operators-9tmcs\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.918722 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-catalog-content\") pod \"community-operators-9tmcs\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.918798 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-utilities\") pod \"community-operators-9tmcs\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.939624 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sgx7\" (UniqueName: \"kubernetes.io/projected/a25bb436-c214-46ac-8371-81d626928cdb-kube-api-access-2sgx7\") pod \"community-operators-9tmcs\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:35 crc kubenswrapper[4982]: I0224 15:36:35.990838 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:36 crc kubenswrapper[4982]: I0224 15:36:36.677483 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9tmcs"] Feb 24 15:36:37 crc kubenswrapper[4982]: I0224 15:36:37.393289 4982 generic.go:334] "Generic (PLEG): container finished" podID="a25bb436-c214-46ac-8371-81d626928cdb" containerID="e34da0df370f10202a851457ca640f85085b0e841e05d85fc33e01086493c1e0" exitCode=0 Feb 24 15:36:37 crc kubenswrapper[4982]: I0224 15:36:37.393346 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tmcs" event={"ID":"a25bb436-c214-46ac-8371-81d626928cdb","Type":"ContainerDied","Data":"e34da0df370f10202a851457ca640f85085b0e841e05d85fc33e01086493c1e0"} Feb 24 15:36:37 crc kubenswrapper[4982]: I0224 15:36:37.393710 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tmcs" event={"ID":"a25bb436-c214-46ac-8371-81d626928cdb","Type":"ContainerStarted","Data":"32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513"} Feb 24 15:36:38 crc kubenswrapper[4982]: I0224 15:36:38.408372 4982 generic.go:334] "Generic (PLEG): container finished" podID="de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" containerID="e895940c0edc307dd9fd68106c658c6166e2a980ac8f097fc082736e41efac9d" exitCode=0 Feb 24 15:36:38 crc kubenswrapper[4982]: I0224 15:36:38.408546 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" event={"ID":"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7","Type":"ContainerDied","Data":"e895940c0edc307dd9fd68106c658c6166e2a980ac8f097fc082736e41efac9d"} Feb 24 15:36:38 crc kubenswrapper[4982]: I0224 15:36:38.411274 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tmcs" event={"ID":"a25bb436-c214-46ac-8371-81d626928cdb","Type":"ContainerStarted","Data":"f1a05337c2e8999cc45121c83144a9cd95a49f5f9da3010ffbb6b21550cb5fb1"} Feb 24 15:36:39 crc kubenswrapper[4982]: I0224 15:36:39.968758 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.080974 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-1\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081057 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-3\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081121 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-combined-ca-bundle\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081154 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-2\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081232 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-0\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081387 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-0\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081413 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-1\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081462 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz59s\" (UniqueName: \"kubernetes.io/projected/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-kube-api-access-bz59s\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081487 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-extra-config-0\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081531 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-inventory\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.081591 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-ssh-key-openstack-edpm-ipam\") pod \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\" (UID: \"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7\") " Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.088053 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-kube-api-access-bz59s" (OuterVolumeSpecName: "kube-api-access-bz59s") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "kube-api-access-bz59s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.104905 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.127477 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.128442 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.129837 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.133076 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.134476 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.136064 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-inventory" (OuterVolumeSpecName: "inventory") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.142085 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.152282 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.165707 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" (UID: "de5e4e7c-74db-408c-bad2-18f9d3bdfeb7"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184669 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184726 4982 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184741 4982 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184754 4982 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184766 4982 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184779 4982 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184790 4982 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184801 4982 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184814 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz59s\" (UniqueName: \"kubernetes.io/projected/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-kube-api-access-bz59s\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184824 4982 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.184836 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5e4e7c-74db-408c-bad2-18f9d3bdfeb7-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.435103 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" event={"ID":"de5e4e7c-74db-408c-bad2-18f9d3bdfeb7","Type":"ContainerDied","Data":"8be43e962d688ee2e449aee51f026181219c0c690a1ebfe0b394d3c4009ecfd4"} Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 
15:36:40.435388 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be43e962d688ee2e449aee51f026181219c0c690a1ebfe0b394d3c4009ecfd4" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.435133 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qjzqt" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.437323 4982 generic.go:334] "Generic (PLEG): container finished" podID="a25bb436-c214-46ac-8371-81d626928cdb" containerID="f1a05337c2e8999cc45121c83144a9cd95a49f5f9da3010ffbb6b21550cb5fb1" exitCode=0 Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.437349 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tmcs" event={"ID":"a25bb436-c214-46ac-8371-81d626928cdb","Type":"ContainerDied","Data":"f1a05337c2e8999cc45121c83144a9cd95a49f5f9da3010ffbb6b21550cb5fb1"} Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.529266 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh"] Feb 24 15:36:40 crc kubenswrapper[4982]: E0224 15:36:40.529960 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.529986 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.530362 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="de5e4e7c-74db-408c-bad2-18f9d3bdfeb7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.531419 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.533290 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.533477 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.533631 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.537589 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.537604 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.538787 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh"] Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.695384 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.695488 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.695845 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.695951 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.695994 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhc4\" (UniqueName: \"kubernetes.io/projected/d376d1ae-0d9e-457a-97c3-dce655164119-kube-api-access-lvhc4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.696153 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.696290 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.798618 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.798709 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.798955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.799048 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.799089 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvhc4\" (UniqueName: \"kubernetes.io/projected/d376d1ae-0d9e-457a-97c3-dce655164119-kube-api-access-lvhc4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.799222 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.799316 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.803989 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.804163 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.804165 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.804668 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.806229 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.806426 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.814546 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvhc4\" (UniqueName: 
\"kubernetes.io/projected/d376d1ae-0d9e-457a-97c3-dce655164119-kube-api-access-lvhc4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:40 crc kubenswrapper[4982]: I0224 15:36:40.862292 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:36:41 crc kubenswrapper[4982]: I0224 15:36:41.425429 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh"] Feb 24 15:36:41 crc kubenswrapper[4982]: I0224 15:36:41.448064 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tmcs" event={"ID":"a25bb436-c214-46ac-8371-81d626928cdb","Type":"ContainerStarted","Data":"4db6e411932d3998e55c8b525e393b60abdfe144e9696410ba6e5560ad9b5572"} Feb 24 15:36:41 crc kubenswrapper[4982]: I0224 15:36:41.450104 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" event={"ID":"d376d1ae-0d9e-457a-97c3-dce655164119","Type":"ContainerStarted","Data":"6263991a225e59975fdf00f1e94d2a965fbf3c6317adca0fd1a9279621668da5"} Feb 24 15:36:41 crc kubenswrapper[4982]: I0224 15:36:41.464251 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9tmcs" podStartSLOduration=3.030384895 podStartE2EDuration="6.46423231s" podCreationTimestamp="2026-02-24 15:36:35 +0000 UTC" firstStartedPulling="2026-02-24 15:36:37.395314658 +0000 UTC m=+2859.014373171" lastFinishedPulling="2026-02-24 15:36:40.829162083 +0000 UTC m=+2862.448220586" observedRunningTime="2026-02-24 15:36:41.464152788 +0000 UTC m=+2863.083211301" watchObservedRunningTime="2026-02-24 15:36:41.46423231 +0000 UTC m=+2863.083290803" Feb 24 15:36:42 crc kubenswrapper[4982]: I0224 15:36:42.464154 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" event={"ID":"d376d1ae-0d9e-457a-97c3-dce655164119","Type":"ContainerStarted","Data":"de153ab90536a0fcba8e642fe1fdd840b89965ebcf3533618f19eb4ede84bdec"} Feb 24 15:36:42 crc kubenswrapper[4982]: I0224 15:36:42.493016 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" podStartSLOduration=1.981426183 podStartE2EDuration="2.492998994s" podCreationTimestamp="2026-02-24 15:36:40 +0000 UTC" firstStartedPulling="2026-02-24 15:36:41.430110352 +0000 UTC m=+2863.049168845" lastFinishedPulling="2026-02-24 15:36:41.941683123 +0000 UTC m=+2863.560741656" observedRunningTime="2026-02-24 15:36:42.490040003 +0000 UTC m=+2864.109098506" watchObservedRunningTime="2026-02-24 15:36:42.492998994 +0000 UTC m=+2864.112057487" Feb 24 15:36:45 crc kubenswrapper[4982]: I0224 15:36:45.991086 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:45 crc kubenswrapper[4982]: I0224 15:36:45.992671 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:47 crc kubenswrapper[4982]: I0224 15:36:47.081234 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9tmcs" podUID="a25bb436-c214-46ac-8371-81d626928cdb" 
containerName="registry-server" probeResult="failure" output=< Feb 24 15:36:47 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:36:47 crc kubenswrapper[4982]: > Feb 24 15:36:48 crc kubenswrapper[4982]: I0224 15:36:48.275429 4982 scope.go:117] "RemoveContainer" containerID="5fafeb85b407f194c277719ce9d55e44aba017076066a63df79e27ad6f327637" Feb 24 15:36:56 crc kubenswrapper[4982]: I0224 15:36:56.089719 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:56 crc kubenswrapper[4982]: I0224 15:36:56.182230 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:56 crc kubenswrapper[4982]: I0224 15:36:56.345948 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9tmcs"] Feb 24 15:36:57 crc kubenswrapper[4982]: I0224 15:36:57.661941 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9tmcs" podUID="a25bb436-c214-46ac-8371-81d626928cdb" containerName="registry-server" containerID="cri-o://4db6e411932d3998e55c8b525e393b60abdfe144e9696410ba6e5560ad9b5572" gracePeriod=2 Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.677444 4982 generic.go:334] "Generic (PLEG): container finished" podID="a25bb436-c214-46ac-8371-81d626928cdb" containerID="4db6e411932d3998e55c8b525e393b60abdfe144e9696410ba6e5560ad9b5572" exitCode=0 Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.677537 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tmcs" event={"ID":"a25bb436-c214-46ac-8371-81d626928cdb","Type":"ContainerDied","Data":"4db6e411932d3998e55c8b525e393b60abdfe144e9696410ba6e5560ad9b5572"} Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.829923 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.854033 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-catalog-content\") pod \"a25bb436-c214-46ac-8371-81d626928cdb\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.854564 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-utilities\") pod \"a25bb436-c214-46ac-8371-81d626928cdb\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.854688 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sgx7\" (UniqueName: \"kubernetes.io/projected/a25bb436-c214-46ac-8371-81d626928cdb-kube-api-access-2sgx7\") pod \"a25bb436-c214-46ac-8371-81d626928cdb\" (UID: \"a25bb436-c214-46ac-8371-81d626928cdb\") " Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.857528 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-utilities" (OuterVolumeSpecName: "utilities") pod "a25bb436-c214-46ac-8371-81d626928cdb" (UID: "a25bb436-c214-46ac-8371-81d626928cdb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.879377 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25bb436-c214-46ac-8371-81d626928cdb-kube-api-access-2sgx7" (OuterVolumeSpecName: "kube-api-access-2sgx7") pod "a25bb436-c214-46ac-8371-81d626928cdb" (UID: "a25bb436-c214-46ac-8371-81d626928cdb"). InnerVolumeSpecName "kube-api-access-2sgx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.934613 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a25bb436-c214-46ac-8371-81d626928cdb" (UID: "a25bb436-c214-46ac-8371-81d626928cdb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.957772 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.957798 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25bb436-c214-46ac-8371-81d626928cdb-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:58 crc kubenswrapper[4982]: I0224 15:36:58.957808 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sgx7\" (UniqueName: \"kubernetes.io/projected/a25bb436-c214-46ac-8371-81d626928cdb-kube-api-access-2sgx7\") on node \"crc\" DevicePath \"\"" Feb 24 15:36:59 crc kubenswrapper[4982]: I0224 15:36:59.691217 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9tmcs" event={"ID":"a25bb436-c214-46ac-8371-81d626928cdb","Type":"ContainerDied","Data":"32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513"} Feb 24 15:36:59 crc kubenswrapper[4982]: I0224 15:36:59.691277 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9tmcs" Feb 24 15:36:59 crc kubenswrapper[4982]: I0224 15:36:59.691297 4982 scope.go:117] "RemoveContainer" containerID="4db6e411932d3998e55c8b525e393b60abdfe144e9696410ba6e5560ad9b5572" Feb 24 15:36:59 crc kubenswrapper[4982]: I0224 15:36:59.722265 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9tmcs"] Feb 24 15:36:59 crc kubenswrapper[4982]: I0224 15:36:59.731920 4982 scope.go:117] "RemoveContainer" containerID="f1a05337c2e8999cc45121c83144a9cd95a49f5f9da3010ffbb6b21550cb5fb1" Feb 24 15:36:59 crc kubenswrapper[4982]: I0224 15:36:59.741488 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9tmcs"] Feb 24 15:36:59 crc kubenswrapper[4982]: I0224 15:36:59.768576 4982 scope.go:117] "RemoveContainer" containerID="e34da0df370f10202a851457ca640f85085b0e841e05d85fc33e01086493c1e0" Feb 24 15:37:01 crc kubenswrapper[4982]: I0224 15:37:01.163218 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25bb436-c214-46ac-8371-81d626928cdb" path="/var/lib/kubelet/pods/a25bb436-c214-46ac-8371-81d626928cdb/volumes" Feb 24 15:37:04 crc kubenswrapper[4982]: E0224 15:37:04.103911 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:11 crc kubenswrapper[4982]: E0224 15:37:11.295911 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:14 crc kubenswrapper[4982]: E0224 15:37:14.158729 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:24 crc kubenswrapper[4982]: E0224 15:37:24.564789 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:26 crc kubenswrapper[4982]: E0224 15:37:26.083694 
4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:34 crc kubenswrapper[4982]: E0224 15:37:34.869406 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:38 crc kubenswrapper[4982]: I0224 15:37:38.738598 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:37:38 crc kubenswrapper[4982]: I0224 15:37:38.739303 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:37:41 crc kubenswrapper[4982]: E0224 15:37:41.399872 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:44 crc kubenswrapper[4982]: E0224 15:37:44.964046 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:48 crc kubenswrapper[4982]: E0224 15:37:48.306150 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:48 crc 
kubenswrapper[4982]: E0224 15:37:48.306434 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:55 crc kubenswrapper[4982]: E0224 15:37:55.344435 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:56 crc kubenswrapper[4982]: E0224 15:37:56.084713 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25bb436_c214_46ac_8371_81d626928cdb.slice/crio-32df9a6a90a3fe2294f15a206d6b3420162a72991bb9694bee087491bd284513\": RecentStats: unable to find data in memory cache]" Feb 24 15:37:59 crc kubenswrapper[4982]: E0224 15:37:59.213389 4982 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5157a96faa4aadbc35306e21f2628e83237f42795305388f464da3237bc630a7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5157a96faa4aadbc35306e21f2628e83237f42795305388f464da3237bc630a7/diff: no such file or directory, extraDiskErr: Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.184528 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532458-jfvd9"] Feb 24 15:38:00 crc kubenswrapper[4982]: E0224 15:38:00.185311 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25bb436-c214-46ac-8371-81d626928cdb" containerName="registry-server" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.185330 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25bb436-c214-46ac-8371-81d626928cdb" containerName="registry-server" Feb 24 15:38:00 crc kubenswrapper[4982]: E0224 15:38:00.185341 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25bb436-c214-46ac-8371-81d626928cdb" containerName="extract-content" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.185348 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25bb436-c214-46ac-8371-81d626928cdb" containerName="extract-content" Feb 24 15:38:00 crc kubenswrapper[4982]: E0224 15:38:00.185395 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25bb436-c214-46ac-8371-81d626928cdb" containerName="extract-utilities" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.185402 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25bb436-c214-46ac-8371-81d626928cdb" containerName="extract-utilities" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.185649 
4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25bb436-c214-46ac-8371-81d626928cdb" containerName="registry-server" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.186455 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532458-jfvd9" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.190852 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.190863 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.191084 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.207299 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532458-jfvd9"] Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.267346 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lk6g\" (UniqueName: \"kubernetes.io/projected/e5bf17f1-b2a2-4341-92fc-7711a71e8766-kube-api-access-4lk6g\") pod \"auto-csr-approver-29532458-jfvd9\" (UID: \"e5bf17f1-b2a2-4341-92fc-7711a71e8766\") " pod="openshift-infra/auto-csr-approver-29532458-jfvd9" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.369802 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lk6g\" (UniqueName: \"kubernetes.io/projected/e5bf17f1-b2a2-4341-92fc-7711a71e8766-kube-api-access-4lk6g\") pod \"auto-csr-approver-29532458-jfvd9\" (UID: \"e5bf17f1-b2a2-4341-92fc-7711a71e8766\") " pod="openshift-infra/auto-csr-approver-29532458-jfvd9" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.387906 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lk6g\" (UniqueName: \"kubernetes.io/projected/e5bf17f1-b2a2-4341-92fc-7711a71e8766-kube-api-access-4lk6g\") pod \"auto-csr-approver-29532458-jfvd9\" (UID: \"e5bf17f1-b2a2-4341-92fc-7711a71e8766\") " pod="openshift-infra/auto-csr-approver-29532458-jfvd9" Feb 24 15:38:00 crc kubenswrapper[4982]: I0224 15:38:00.512443 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532458-jfvd9" Feb 24 15:38:01 crc kubenswrapper[4982]: I0224 15:38:01.048384 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532458-jfvd9"] Feb 24 15:38:01 crc kubenswrapper[4982]: W0224 15:38:01.054361 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5bf17f1_b2a2_4341_92fc_7711a71e8766.slice/crio-7a27f58e77c4a1534e86bbe1fa084d887804f969c5f2bd1259a6d6b3f9f39583 WatchSource:0}: Error finding container 7a27f58e77c4a1534e86bbe1fa084d887804f969c5f2bd1259a6d6b3f9f39583: Status 404 returned error can't find the container with id 7a27f58e77c4a1534e86bbe1fa084d887804f969c5f2bd1259a6d6b3f9f39583 Feb 24 15:38:01 crc kubenswrapper[4982]: I0224 15:38:01.593681 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532458-jfvd9" event={"ID":"e5bf17f1-b2a2-4341-92fc-7711a71e8766","Type":"ContainerStarted","Data":"7a27f58e77c4a1534e86bbe1fa084d887804f969c5f2bd1259a6d6b3f9f39583"} Feb 24 15:38:02 crc kubenswrapper[4982]: I0224 15:38:02.606625 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532458-jfvd9" event={"ID":"e5bf17f1-b2a2-4341-92fc-7711a71e8766","Type":"ContainerStarted","Data":"4ef029c85945ddaca6c1362b1c2f3e3e8cbd2ae89c61ffaeb5e6a06c2fe51701"} Feb 24 15:38:02 crc kubenswrapper[4982]: I0224 15:38:02.623083 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532458-jfvd9" podStartSLOduration=1.629945519 podStartE2EDuration="2.623063541s" podCreationTimestamp="2026-02-24 15:38:00 +0000 UTC" firstStartedPulling="2026-02-24 15:38:01.058297245 +0000 UTC m=+2942.677355748" lastFinishedPulling="2026-02-24 15:38:02.051415267 +0000 UTC m=+2943.670473770" observedRunningTime="2026-02-24 15:38:02.619907685 +0000 UTC m=+2944.238966178" watchObservedRunningTime="2026-02-24 15:38:02.623063541 +0000 UTC m=+2944.242122034" Feb 24 15:38:03 crc kubenswrapper[4982]: I0224 15:38:03.635646 4982 generic.go:334] "Generic (PLEG): container finished" podID="e5bf17f1-b2a2-4341-92fc-7711a71e8766" containerID="4ef029c85945ddaca6c1362b1c2f3e3e8cbd2ae89c61ffaeb5e6a06c2fe51701" exitCode=0 Feb 24 15:38:03 crc kubenswrapper[4982]: I0224 15:38:03.636016 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532458-jfvd9" event={"ID":"e5bf17f1-b2a2-4341-92fc-7711a71e8766","Type":"ContainerDied","Data":"4ef029c85945ddaca6c1362b1c2f3e3e8cbd2ae89c61ffaeb5e6a06c2fe51701"} Feb 24 15:38:05 crc kubenswrapper[4982]: I0224 15:38:05.104547 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532458-jfvd9" Feb 24 15:38:05 crc kubenswrapper[4982]: I0224 15:38:05.200625 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lk6g\" (UniqueName: \"kubernetes.io/projected/e5bf17f1-b2a2-4341-92fc-7711a71e8766-kube-api-access-4lk6g\") pod \"e5bf17f1-b2a2-4341-92fc-7711a71e8766\" (UID: \"e5bf17f1-b2a2-4341-92fc-7711a71e8766\") " Feb 24 15:38:05 crc kubenswrapper[4982]: I0224 15:38:05.207148 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5bf17f1-b2a2-4341-92fc-7711a71e8766-kube-api-access-4lk6g" (OuterVolumeSpecName: "kube-api-access-4lk6g") pod "e5bf17f1-b2a2-4341-92fc-7711a71e8766" (UID: "e5bf17f1-b2a2-4341-92fc-7711a71e8766"). InnerVolumeSpecName "kube-api-access-4lk6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:38:05 crc kubenswrapper[4982]: I0224 15:38:05.305915 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lk6g\" (UniqueName: \"kubernetes.io/projected/e5bf17f1-b2a2-4341-92fc-7711a71e8766-kube-api-access-4lk6g\") on node \"crc\" DevicePath \"\"" Feb 24 15:38:05 crc kubenswrapper[4982]: I0224 15:38:05.672840 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532458-jfvd9" event={"ID":"e5bf17f1-b2a2-4341-92fc-7711a71e8766","Type":"ContainerDied","Data":"7a27f58e77c4a1534e86bbe1fa084d887804f969c5f2bd1259a6d6b3f9f39583"} Feb 24 15:38:05 crc kubenswrapper[4982]: I0224 15:38:05.673254 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a27f58e77c4a1534e86bbe1fa084d887804f969c5f2bd1259a6d6b3f9f39583" Feb 24 15:38:05 crc kubenswrapper[4982]: I0224 15:38:05.672926 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532458-jfvd9" Feb 24 15:38:05 crc kubenswrapper[4982]: I0224 15:38:05.732736 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532452-4j6xr"] Feb 24 15:38:05 crc kubenswrapper[4982]: I0224 15:38:05.745097 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532452-4j6xr"] Feb 24 15:38:07 crc kubenswrapper[4982]: I0224 15:38:07.167484 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84f60780-22b1-481b-aa05-c11f94ce37bb" path="/var/lib/kubelet/pods/84f60780-22b1-481b-aa05-c11f94ce37bb/volumes" Feb 24 15:38:08 crc kubenswrapper[4982]: I0224 15:38:08.738814 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:38:08 crc kubenswrapper[4982]: I0224 15:38:08.740667 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:38:38 crc kubenswrapper[4982]: I0224 15:38:38.737729 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:38:38 crc kubenswrapper[4982]: I0224 15:38:38.738323 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:38:38 crc kubenswrapper[4982]: I0224 15:38:38.738380 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:38:38 crc kubenswrapper[4982]: I0224 15:38:38.739439 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf6256e0be4e6c2433d64e926d1b0b4f0e8eaa1115b4db80d6492b104ec63749"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:38:38 crc kubenswrapper[4982]: I0224 15:38:38.739532 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://bf6256e0be4e6c2433d64e926d1b0b4f0e8eaa1115b4db80d6492b104ec63749" gracePeriod=600 Feb 24 15:38:39 crc kubenswrapper[4982]: I0224 15:38:39.117582 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="bf6256e0be4e6c2433d64e926d1b0b4f0e8eaa1115b4db80d6492b104ec63749" exitCode=0 Feb 24 15:38:39 crc kubenswrapper[4982]: I0224 15:38:39.117901 4982 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"bf6256e0be4e6c2433d64e926d1b0b4f0e8eaa1115b4db80d6492b104ec63749"} Feb 24 15:38:39 crc kubenswrapper[4982]: I0224 15:38:39.117944 4982 scope.go:117] "RemoveContainer" containerID="e8e767a0a9a079d3f7917fe338c2543f11be201c27927c930cea38d1fceb7d6c" Feb 24 15:38:40 crc kubenswrapper[4982]: I0224 15:38:40.134626 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9"} Feb 24 15:38:48 crc kubenswrapper[4982]: I0224 15:38:48.438166 4982 scope.go:117] "RemoveContainer" containerID="bda0dd437990578da7d8486043df11316b410327db23e220ceabf226e5c2d9f0" Feb 24 15:39:20 crc kubenswrapper[4982]: I0224 15:39:20.713290 4982 generic.go:334] "Generic (PLEG): container finished" podID="d376d1ae-0d9e-457a-97c3-dce655164119" containerID="de153ab90536a0fcba8e642fe1fdd840b89965ebcf3533618f19eb4ede84bdec" exitCode=0 Feb 24 15:39:20 crc kubenswrapper[4982]: I0224 15:39:20.713378 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" event={"ID":"d376d1ae-0d9e-457a-97c3-dce655164119","Type":"ContainerDied","Data":"de153ab90536a0fcba8e642fe1fdd840b89965ebcf3533618f19eb4ede84bdec"} Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.295904 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.473354 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvhc4\" (UniqueName: \"kubernetes.io/projected/d376d1ae-0d9e-457a-97c3-dce655164119-kube-api-access-lvhc4\") pod \"d376d1ae-0d9e-457a-97c3-dce655164119\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.473852 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-telemetry-combined-ca-bundle\") pod \"d376d1ae-0d9e-457a-97c3-dce655164119\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.473928 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-inventory\") pod \"d376d1ae-0d9e-457a-97c3-dce655164119\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.473965 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-2\") pod \"d376d1ae-0d9e-457a-97c3-dce655164119\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.474022 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-0\") pod 
\"d376d1ae-0d9e-457a-97c3-dce655164119\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.474098 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ssh-key-openstack-edpm-ipam\") pod \"d376d1ae-0d9e-457a-97c3-dce655164119\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.474150 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-1\") pod \"d376d1ae-0d9e-457a-97c3-dce655164119\" (UID: \"d376d1ae-0d9e-457a-97c3-dce655164119\") " Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.502893 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d376d1ae-0d9e-457a-97c3-dce655164119-kube-api-access-lvhc4" (OuterVolumeSpecName: "kube-api-access-lvhc4") pod "d376d1ae-0d9e-457a-97c3-dce655164119" (UID: "d376d1ae-0d9e-457a-97c3-dce655164119"). InnerVolumeSpecName "kube-api-access-lvhc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.517496 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d376d1ae-0d9e-457a-97c3-dce655164119" (UID: "d376d1ae-0d9e-457a-97c3-dce655164119"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.522627 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "d376d1ae-0d9e-457a-97c3-dce655164119" (UID: "d376d1ae-0d9e-457a-97c3-dce655164119"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.541641 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "d376d1ae-0d9e-457a-97c3-dce655164119" (UID: "d376d1ae-0d9e-457a-97c3-dce655164119"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.563648 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "d376d1ae-0d9e-457a-97c3-dce655164119" (UID: "d376d1ae-0d9e-457a-97c3-dce655164119"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.576633 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvhc4\" (UniqueName: \"kubernetes.io/projected/d376d1ae-0d9e-457a-97c3-dce655164119-kube-api-access-lvhc4\") on node \"crc\" DevicePath \"\"" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.576661 4982 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.576669 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.576679 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.576687 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.576912 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-inventory" (OuterVolumeSpecName: "inventory") pod "d376d1ae-0d9e-457a-97c3-dce655164119" (UID: "d376d1ae-0d9e-457a-97c3-dce655164119"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.588670 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d376d1ae-0d9e-457a-97c3-dce655164119" (UID: "d376d1ae-0d9e-457a-97c3-dce655164119"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.678788 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.678820 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d376d1ae-0d9e-457a-97c3-dce655164119-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.797940 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" event={"ID":"d376d1ae-0d9e-457a-97c3-dce655164119","Type":"ContainerDied","Data":"6263991a225e59975fdf00f1e94d2a965fbf3c6317adca0fd1a9279621668da5"} Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.797980 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6263991a225e59975fdf00f1e94d2a965fbf3c6317adca0fd1a9279621668da5" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.798039 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.896695 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6"] Feb 24 15:39:22 crc kubenswrapper[4982]: E0224 15:39:22.897259 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d376d1ae-0d9e-457a-97c3-dce655164119" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.897276 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d376d1ae-0d9e-457a-97c3-dce655164119" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 24 15:39:22 crc kubenswrapper[4982]: E0224 15:39:22.897295 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5bf17f1-b2a2-4341-92fc-7711a71e8766" containerName="oc" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.897301 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bf17f1-b2a2-4341-92fc-7711a71e8766" containerName="oc" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.897533 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5bf17f1-b2a2-4341-92fc-7711a71e8766" containerName="oc" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.897549 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d376d1ae-0d9e-457a-97c3-dce655164119" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.898396 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.933712 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.933948 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.934163 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.934379 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.934533 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.940666 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6"] Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.986163 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.986239 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.986280 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.986324 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.986386 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-2\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.986441 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:22 crc kubenswrapper[4982]: I0224 15:39:22.986558 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwq5f\" (UniqueName: \"kubernetes.io/projected/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-kube-api-access-zwq5f\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.088848 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.089537 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.089593 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.089692 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwq5f\" (UniqueName: \"kubernetes.io/projected/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-kube-api-access-zwq5f\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.090023 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.090073 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.090108 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.095460 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.096079 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.096528 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.096965 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.098950 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.099220 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.109637 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwq5f\" (UniqueName: \"kubernetes.io/projected/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-kube-api-access-zwq5f\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.252287 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:39:23 crc kubenswrapper[4982]: I0224 15:39:23.865371 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6"] Feb 24 15:39:23 crc kubenswrapper[4982]: W0224 15:39:23.873960 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb3d6bb3_611c_4c2a_bb6b_7a3dca94b9fd.slice/crio-6dc1961b2094702db96476384a678608f11858195337192bc634312dd2cfa1bf WatchSource:0}: Error finding container 6dc1961b2094702db96476384a678608f11858195337192bc634312dd2cfa1bf: Status 404 returned error can't find the container with id 6dc1961b2094702db96476384a678608f11858195337192bc634312dd2cfa1bf Feb 24 15:39:24 crc kubenswrapper[4982]: I0224 15:39:24.826427 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" event={"ID":"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd","Type":"ContainerStarted","Data":"a58ca48b7182814089f35e29b28724dc991f728077151e48a4ac52a93c37914a"} Feb 24 15:39:24 crc kubenswrapper[4982]: I0224 15:39:24.827021 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" event={"ID":"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd","Type":"ContainerStarted","Data":"6dc1961b2094702db96476384a678608f11858195337192bc634312dd2cfa1bf"} Feb 24 15:39:24 crc kubenswrapper[4982]: I0224 15:39:24.863479 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" podStartSLOduration=2.374943545 podStartE2EDuration="2.863457041s" podCreationTimestamp="2026-02-24 15:39:22 +0000 UTC" firstStartedPulling="2026-02-24 15:39:23.876867678 +0000 UTC m=+3025.495926171" lastFinishedPulling="2026-02-24 15:39:24.365381174 +0000 UTC m=+3025.984439667" observedRunningTime="2026-02-24 15:39:24.851918536 +0000 UTC m=+3026.470977039" watchObservedRunningTime="2026-02-24 15:39:24.863457041 +0000 UTC m=+3026.482515534" Feb 24 15:39:57 crc kubenswrapper[4982]: I0224 15:39:57.959790 4982 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-8b9r9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 15:39:57 crc 
kubenswrapper[4982]: I0224 15:39:57.960308 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8b9r9" podUID="22ced552-66a9-4936-8c25-3e3e8734de79" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.204604 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532460-f8rxs"] Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.208666 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532460-f8rxs" Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.213248 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.213262 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.213437 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.233896 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532460-f8rxs"] Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.260206 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c778n\" (UniqueName: \"kubernetes.io/projected/a7c7c372-a351-4958-93a9-40356bdfe6a4-kube-api-access-c778n\") pod \"auto-csr-approver-29532460-f8rxs\" (UID: \"a7c7c372-a351-4958-93a9-40356bdfe6a4\") " pod="openshift-infra/auto-csr-approver-29532460-f8rxs" Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.363150 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c778n\" (UniqueName: \"kubernetes.io/projected/a7c7c372-a351-4958-93a9-40356bdfe6a4-kube-api-access-c778n\") pod \"auto-csr-approver-29532460-f8rxs\" (UID: \"a7c7c372-a351-4958-93a9-40356bdfe6a4\") " pod="openshift-infra/auto-csr-approver-29532460-f8rxs" Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.386740 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c778n\" (UniqueName: \"kubernetes.io/projected/a7c7c372-a351-4958-93a9-40356bdfe6a4-kube-api-access-c778n\") pod \"auto-csr-approver-29532460-f8rxs\" (UID: \"a7c7c372-a351-4958-93a9-40356bdfe6a4\") " pod="openshift-infra/auto-csr-approver-29532460-f8rxs" Feb 24 15:40:00 crc kubenswrapper[4982]: I0224 15:40:00.532404 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532460-f8rxs" Feb 24 15:40:01 crc kubenswrapper[4982]: I0224 15:40:01.101974 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532460-f8rxs"] Feb 24 15:40:01 crc kubenswrapper[4982]: W0224 15:40:01.107083 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7c7c372_a351_4958_93a9_40356bdfe6a4.slice/crio-dd9c9c2b6d7337c3e3231ba6d90b70e1f5ef2b316b9377b9e8dd2ce822671368 WatchSource:0}: Error finding container dd9c9c2b6d7337c3e3231ba6d90b70e1f5ef2b316b9377b9e8dd2ce822671368: Status 404 returned error can't find the container with id dd9c9c2b6d7337c3e3231ba6d90b70e1f5ef2b316b9377b9e8dd2ce822671368 Feb 24 15:40:02 crc kubenswrapper[4982]: I0224 15:40:02.076418 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532460-f8rxs" event={"ID":"a7c7c372-a351-4958-93a9-40356bdfe6a4","Type":"ContainerStarted","Data":"dd9c9c2b6d7337c3e3231ba6d90b70e1f5ef2b316b9377b9e8dd2ce822671368"} Feb 24 15:40:03 crc kubenswrapper[4982]: I0224 15:40:03.109736 4982 generic.go:334] "Generic (PLEG): container finished" podID="a7c7c372-a351-4958-93a9-40356bdfe6a4" containerID="c2f28db5f69ae4e1a74997cb27882d2a803358573d534a5561b27bc026e91a2c" exitCode=0 Feb 24 15:40:03 crc kubenswrapper[4982]: I0224 15:40:03.109825 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532460-f8rxs" event={"ID":"a7c7c372-a351-4958-93a9-40356bdfe6a4","Type":"ContainerDied","Data":"c2f28db5f69ae4e1a74997cb27882d2a803358573d534a5561b27bc026e91a2c"} Feb 24 15:40:04 crc kubenswrapper[4982]: I0224 15:40:04.501636 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532460-f8rxs" Feb 24 15:40:04 crc kubenswrapper[4982]: I0224 15:40:04.572394 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c778n\" (UniqueName: \"kubernetes.io/projected/a7c7c372-a351-4958-93a9-40356bdfe6a4-kube-api-access-c778n\") pod \"a7c7c372-a351-4958-93a9-40356bdfe6a4\" (UID: \"a7c7c372-a351-4958-93a9-40356bdfe6a4\") " Feb 24 15:40:04 crc kubenswrapper[4982]: I0224 15:40:04.578178 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c7c372-a351-4958-93a9-40356bdfe6a4-kube-api-access-c778n" (OuterVolumeSpecName: "kube-api-access-c778n") pod "a7c7c372-a351-4958-93a9-40356bdfe6a4" (UID: "a7c7c372-a351-4958-93a9-40356bdfe6a4"). InnerVolumeSpecName "kube-api-access-c778n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:40:04 crc kubenswrapper[4982]: I0224 15:40:04.675908 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c778n\" (UniqueName: \"kubernetes.io/projected/a7c7c372-a351-4958-93a9-40356bdfe6a4-kube-api-access-c778n\") on node \"crc\" DevicePath \"\"" Feb 24 15:40:05 crc kubenswrapper[4982]: I0224 15:40:05.137254 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532460-f8rxs" event={"ID":"a7c7c372-a351-4958-93a9-40356bdfe6a4","Type":"ContainerDied","Data":"dd9c9c2b6d7337c3e3231ba6d90b70e1f5ef2b316b9377b9e8dd2ce822671368"} Feb 24 15:40:05 crc kubenswrapper[4982]: I0224 15:40:05.137314 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd9c9c2b6d7337c3e3231ba6d90b70e1f5ef2b316b9377b9e8dd2ce822671368" Feb 24 15:40:05 crc kubenswrapper[4982]: I0224 15:40:05.137391 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532460-f8rxs" Feb 24 15:40:05 crc kubenswrapper[4982]: I0224 15:40:05.620706 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532454-28dfk"] Feb 24 15:40:05 crc kubenswrapper[4982]: I0224 15:40:05.632656 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532454-28dfk"] Feb 24 15:40:07 crc kubenswrapper[4982]: I0224 15:40:07.171386 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4189936c-4226-419e-b33c-ea0500c5cb45" path="/var/lib/kubelet/pods/4189936c-4226-419e-b33c-ea0500c5cb45/volumes" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.645853 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hqxjj"] Feb 24 15:40:36 crc kubenswrapper[4982]: E0224 15:40:36.647667 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c7c372-a351-4958-93a9-40356bdfe6a4" containerName="oc" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.647696 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c7c372-a351-4958-93a9-40356bdfe6a4" containerName="oc" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.648241 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c7c372-a351-4958-93a9-40356bdfe6a4" containerName="oc" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.652036 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.657810 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqxjj"] Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.759148 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-utilities\") pod \"certified-operators-hqxjj\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.759219 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-catalog-content\") pod \"certified-operators-hqxjj\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.759797 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv7sm\" (UniqueName: \"kubernetes.io/projected/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-kube-api-access-vv7sm\") pod \"certified-operators-hqxjj\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.862178 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv7sm\" (UniqueName: \"kubernetes.io/projected/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-kube-api-access-vv7sm\") pod \"certified-operators-hqxjj\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.862361 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-utilities\") pod \"certified-operators-hqxjj\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.862435 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-catalog-content\") pod \"certified-operators-hqxjj\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.863038 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-utilities\") pod \"certified-operators-hqxjj\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.863100 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-catalog-content\") pod \"certified-operators-hqxjj\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.895577 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vv7sm\" (UniqueName: \"kubernetes.io/projected/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-kube-api-access-vv7sm\") pod \"certified-operators-hqxjj\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:36 crc kubenswrapper[4982]: I0224 15:40:36.987350 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:37 crc kubenswrapper[4982]: I0224 15:40:37.504717 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqxjj"] Feb 24 15:40:37 crc kubenswrapper[4982]: I0224 15:40:37.559821 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqxjj" event={"ID":"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a","Type":"ContainerStarted","Data":"b8aa6f87a2812a2e8596fafc0b67219c6637d9eb44e417dd688b50727602b70e"} Feb 24 15:40:38 crc kubenswrapper[4982]: I0224 15:40:38.576635 4982 generic.go:334] "Generic (PLEG): container finished" podID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerID="6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d" exitCode=0 Feb 24 15:40:38 crc kubenswrapper[4982]: I0224 15:40:38.576699 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqxjj" event={"ID":"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a","Type":"ContainerDied","Data":"6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d"} Feb 24 15:40:39 crc kubenswrapper[4982]: I0224 15:40:39.602240 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqxjj" event={"ID":"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a","Type":"ContainerStarted","Data":"6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9"} Feb 24 15:40:41 crc kubenswrapper[4982]: I0224 15:40:41.637397 4982 generic.go:334] "Generic (PLEG): container finished" podID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerID="6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9" exitCode=0 Feb 24 15:40:41 crc kubenswrapper[4982]: I0224 15:40:41.637537 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqxjj" event={"ID":"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a","Type":"ContainerDied","Data":"6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9"} Feb 24 15:40:42 crc kubenswrapper[4982]: I0224 15:40:42.660134 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqxjj" event={"ID":"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a","Type":"ContainerStarted","Data":"a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528"} Feb 24 15:40:42 crc kubenswrapper[4982]: I0224 15:40:42.689151 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hqxjj" podStartSLOduration=3.217962147 podStartE2EDuration="6.68913199s" podCreationTimestamp="2026-02-24 15:40:36 +0000 UTC" firstStartedPulling="2026-02-24 15:40:38.579557363 +0000 UTC m=+3100.198615866" lastFinishedPulling="2026-02-24 15:40:42.050727216 +0000 UTC m=+3103.669785709" observedRunningTime="2026-02-24 15:40:42.682295764 +0000 UTC m=+3104.301354267" watchObservedRunningTime="2026-02-24 15:40:42.68913199 +0000 UTC m=+3104.308190493" Feb 24 15:40:46 crc kubenswrapper[4982]: I0224 15:40:46.988617 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:46 crc kubenswrapper[4982]: I0224 15:40:46.989884 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:47 crc kubenswrapper[4982]: I0224 15:40:47.083901 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:47 crc kubenswrapper[4982]: I0224 15:40:47.812483 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:47 crc kubenswrapper[4982]: I0224 15:40:47.882857 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqxjj"] Feb 24 15:40:48 crc kubenswrapper[4982]: I0224 15:40:48.583324 4982 scope.go:117] "RemoveContainer" containerID="ed3d43cfffa6ba6bfff55034febca814d94b5f3a737ed14712acee947d05e217" Feb 24 15:40:49 crc kubenswrapper[4982]: I0224 15:40:49.754111 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hqxjj" podUID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerName="registry-server" containerID="cri-o://a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528" gracePeriod=2 Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.291920 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.413021 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv7sm\" (UniqueName: \"kubernetes.io/projected/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-kube-api-access-vv7sm\") pod \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.413120 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-catalog-content\") pod \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.413209 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-utilities\") pod \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\" (UID: \"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a\") " Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.415005 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-utilities" (OuterVolumeSpecName: "utilities") pod "40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" (UID: "40ec5f24-1a8f-4231-8c0c-66fe84f4e81a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.431253 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-kube-api-access-vv7sm" (OuterVolumeSpecName: "kube-api-access-vv7sm") pod "40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" (UID: "40ec5f24-1a8f-4231-8c0c-66fe84f4e81a"). InnerVolumeSpecName "kube-api-access-vv7sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.502560 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" (UID: "40ec5f24-1a8f-4231-8c0c-66fe84f4e81a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.518959 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv7sm\" (UniqueName: \"kubernetes.io/projected/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-kube-api-access-vv7sm\") on node \"crc\" DevicePath \"\"" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.519026 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.519046 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.772938 4982 generic.go:334] "Generic (PLEG): container finished" podID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerID="a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528" exitCode=0 Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.773045 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqxjj" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.773040 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqxjj" event={"ID":"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a","Type":"ContainerDied","Data":"a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528"} Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.773927 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqxjj" event={"ID":"40ec5f24-1a8f-4231-8c0c-66fe84f4e81a","Type":"ContainerDied","Data":"b8aa6f87a2812a2e8596fafc0b67219c6637d9eb44e417dd688b50727602b70e"} Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.773986 4982 scope.go:117] "RemoveContainer" containerID="a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.813405 4982 scope.go:117] "RemoveContainer" containerID="6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.834791 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqxjj"] Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.848464 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hqxjj"] Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.864579 4982 scope.go:117] "RemoveContainer" containerID="6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.929902 4982 scope.go:117] "RemoveContainer" containerID="a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528" Feb 24 15:40:50 crc kubenswrapper[4982]: E0224 15:40:50.930349 4982 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528\": container with ID starting with a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528 not found: ID does not exist" containerID="a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.930409 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528"} err="failed to get container status \"a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528\": rpc error: code = NotFound desc = could not find container \"a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528\": container with ID starting with a6ad3d3972a3e62aa216250c01acd246e616471b7179f4b14aa0c8d6502ad528 not found: ID does not exist" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.930442 4982 scope.go:117] "RemoveContainer" containerID="6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9" Feb 24 15:40:50 crc kubenswrapper[4982]: E0224 15:40:50.931095 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9\": container with ID starting with 6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9 not found: ID does not exist" containerID="6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.931158 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9"} err="failed to get container status \"6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9\": rpc error: code = NotFound desc = could not find container \"6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9\": container with ID starting with 6273c22ddfbba916bc8355fbccf0a0004e860887e3a8a422087487a9f6d318b9 not found: ID does not exist" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.931196 4982 scope.go:117] "RemoveContainer" containerID="6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d" Feb 24 15:40:50 crc kubenswrapper[4982]: E0224 15:40:50.931836 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d\": container with ID starting with 6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d not found: ID does not exist" containerID="6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d" Feb 24 15:40:50 crc kubenswrapper[4982]: I0224 15:40:50.931878 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d"} err="failed to get container status \"6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d\": rpc error: code = NotFound desc = could not find container \"6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d\": container with ID starting with 6b783466499892da1dc4b244e96e2a1833e8d5b7f9e2e0a9282322b37f02a62d not found: ID does not exist" Feb 24 15:40:51 crc kubenswrapper[4982]: I0224 15:40:51.161776 4982 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" path="/var/lib/kubelet/pods/40ec5f24-1a8f-4231-8c0c-66fe84f4e81a/volumes" Feb 24 15:41:08 crc kubenswrapper[4982]: I0224 15:41:08.738727 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:41:08 crc kubenswrapper[4982]: I0224 15:41:08.739281 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:41:34 crc kubenswrapper[4982]: I0224 15:41:34.353816 4982 generic.go:334] "Generic (PLEG): container finished" podID="eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" containerID="a58ca48b7182814089f35e29b28724dc991f728077151e48a4ac52a93c37914a" exitCode=0 Feb 24 15:41:34 crc kubenswrapper[4982]: I0224 15:41:34.353909 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" event={"ID":"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd","Type":"ContainerDied","Data":"a58ca48b7182814089f35e29b28724dc991f728077151e48a4ac52a93c37914a"} Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.886539 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.948950 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-inventory\") pod \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.949003 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwq5f\" (UniqueName: \"kubernetes.io/projected/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-kube-api-access-zwq5f\") pod \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.949120 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-0\") pod \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.949177 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-telemetry-power-monitoring-combined-ca-bundle\") pod \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.949279 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-2\") pod 
\"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.949298 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-1\") pod \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.949325 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ssh-key-openstack-edpm-ipam\") pod \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\" (UID: \"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd\") " Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.967604 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" (UID: "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.967814 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-kube-api-access-zwq5f" (OuterVolumeSpecName: "kube-api-access-zwq5f") pod "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" (UID: "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd"). InnerVolumeSpecName "kube-api-access-zwq5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.986424 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" (UID: "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.986689 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" (UID: "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:35 crc kubenswrapper[4982]: I0224 15:41:35.992520 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" (UID: "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.002379 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-inventory" (OuterVolumeSpecName: "inventory") pod "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" (UID: "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.010618 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" (UID: "eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.051538 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.051785 4982 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.051885 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.051969 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.052049 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.052126 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.052208 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwq5f\" (UniqueName: \"kubernetes.io/projected/eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd-kube-api-access-zwq5f\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.376985 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" event={"ID":"eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd","Type":"ContainerDied","Data":"6dc1961b2094702db96476384a678608f11858195337192bc634312dd2cfa1bf"} Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.377040 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc1961b2094702db96476384a678608f11858195337192bc634312dd2cfa1bf" 
Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.377302 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.502105 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q"] Feb 24 15:41:36 crc kubenswrapper[4982]: E0224 15:41:36.502537 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerName="extract-utilities" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.502554 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerName="extract-utilities" Feb 24 15:41:36 crc kubenswrapper[4982]: E0224 15:41:36.502573 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.502582 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 24 15:41:36 crc kubenswrapper[4982]: E0224 15:41:36.502602 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerName="registry-server" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.502609 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerName="registry-server" Feb 24 15:41:36 crc kubenswrapper[4982]: E0224 15:41:36.502628 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerName="extract-content" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.502634 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerName="extract-content" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.502837 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ec5f24-1a8f-4231-8c0c-66fe84f4e81a" containerName="registry-server" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.502865 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.503637 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.507274 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.508303 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4l4m" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.508516 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.508756 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.509936 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.550899 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q"] Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.564339 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.564419 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.564462 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.564567 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzw4b\" (UniqueName: \"kubernetes.io/projected/dd768711-30f7-4520-845c-c2f7c45a7c6b-kube-api-access-zzw4b\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.564666 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 
15:41:36.667078 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.667201 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzw4b\" (UniqueName: \"kubernetes.io/projected/dd768711-30f7-4520-845c-c2f7c45a7c6b-kube-api-access-zzw4b\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.667299 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.667469 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.667553 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.671128 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.671212 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.671338 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 
15:41:36.672227 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.682510 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzw4b\" (UniqueName: \"kubernetes.io/projected/dd768711-30f7-4520-845c-c2f7c45a7c6b-kube-api-access-zzw4b\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wx72q\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:36 crc kubenswrapper[4982]: I0224 15:41:36.845964 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:37 crc kubenswrapper[4982]: I0224 15:41:37.426737 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q"] Feb 24 15:41:37 crc kubenswrapper[4982]: I0224 15:41:37.431039 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 15:41:38 crc kubenswrapper[4982]: I0224 15:41:38.414354 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" event={"ID":"dd768711-30f7-4520-845c-c2f7c45a7c6b","Type":"ContainerStarted","Data":"e88a4c388b54b2ff9ad805c129407d79dc8c7ff09f921209a0694b3d67bb89b1"} Feb 24 15:41:38 crc kubenswrapper[4982]: I0224 15:41:38.414740 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" event={"ID":"dd768711-30f7-4520-845c-c2f7c45a7c6b","Type":"ContainerStarted","Data":"d00650c28fdc400d5e35d038d71d09fb74de8af8d8b79d5002a94cd2872b30da"} Feb 24 15:41:38 crc kubenswrapper[4982]: I0224 15:41:38.444639 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" podStartSLOduration=1.989314829 podStartE2EDuration="2.444610099s" podCreationTimestamp="2026-02-24 15:41:36 +0000 UTC" firstStartedPulling="2026-02-24 15:41:37.430658819 +0000 UTC m=+3159.049717342" lastFinishedPulling="2026-02-24 15:41:37.885954109 +0000 UTC m=+3159.505012612" observedRunningTime="2026-02-24 15:41:38.435845119 +0000 UTC m=+3160.054903622" watchObservedRunningTime="2026-02-24 15:41:38.444610099 +0000 UTC m=+3160.063668632" Feb 24 15:41:38 crc kubenswrapper[4982]: I0224 15:41:38.738446 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:41:38 crc kubenswrapper[4982]: I0224 15:41:38.738574 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.730631 4982 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-operators-9svzz"] Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.734071 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.758404 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9svzz"] Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.783940 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjxp\" (UniqueName: \"kubernetes.io/projected/03693509-fa32-4d05-879e-b7d5489ad991-kube-api-access-fcjxp\") pod \"redhat-operators-9svzz\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") " pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.784095 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-utilities\") pod \"redhat-operators-9svzz\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") " pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.784194 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-catalog-content\") pod \"redhat-operators-9svzz\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") " pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.885988 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcjxp\" (UniqueName: \"kubernetes.io/projected/03693509-fa32-4d05-879e-b7d5489ad991-kube-api-access-fcjxp\") pod \"redhat-operators-9svzz\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") " pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.886134 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-utilities\") pod \"redhat-operators-9svzz\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") " pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.886237 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-catalog-content\") pod \"redhat-operators-9svzz\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") " pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.886689 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-utilities\") pod \"redhat-operators-9svzz\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") " pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.886826 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-catalog-content\") pod \"redhat-operators-9svzz\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") " 
pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:47 crc kubenswrapper[4982]: I0224 15:41:47.905745 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcjxp\" (UniqueName: \"kubernetes.io/projected/03693509-fa32-4d05-879e-b7d5489ad991-kube-api-access-fcjxp\") pod \"redhat-operators-9svzz\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") " pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:48 crc kubenswrapper[4982]: I0224 15:41:48.069438 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:48 crc kubenswrapper[4982]: I0224 15:41:48.596286 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9svzz"] Feb 24 15:41:49 crc kubenswrapper[4982]: I0224 15:41:49.558478 4982 generic.go:334] "Generic (PLEG): container finished" podID="03693509-fa32-4d05-879e-b7d5489ad991" containerID="02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba" exitCode=0 Feb 24 15:41:49 crc kubenswrapper[4982]: I0224 15:41:49.558583 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svzz" event={"ID":"03693509-fa32-4d05-879e-b7d5489ad991","Type":"ContainerDied","Data":"02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba"} Feb 24 15:41:49 crc kubenswrapper[4982]: I0224 15:41:49.558757 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svzz" event={"ID":"03693509-fa32-4d05-879e-b7d5489ad991","Type":"ContainerStarted","Data":"4522cbf2e4653b63c72247350bcc6235719005cedb176b5ee7005cd7fa185462"} Feb 24 15:41:50 crc kubenswrapper[4982]: I0224 15:41:50.577872 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svzz" event={"ID":"03693509-fa32-4d05-879e-b7d5489ad991","Type":"ContainerStarted","Data":"dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73"} Feb 24 15:41:53 crc kubenswrapper[4982]: I0224 15:41:53.612688 4982 generic.go:334] "Generic (PLEG): container finished" podID="dd768711-30f7-4520-845c-c2f7c45a7c6b" containerID="e88a4c388b54b2ff9ad805c129407d79dc8c7ff09f921209a0694b3d67bb89b1" exitCode=0 Feb 24 15:41:53 crc kubenswrapper[4982]: I0224 15:41:53.612811 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" event={"ID":"dd768711-30f7-4520-845c-c2f7c45a7c6b","Type":"ContainerDied","Data":"e88a4c388b54b2ff9ad805c129407d79dc8c7ff09f921209a0694b3d67bb89b1"} Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.148814 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.192158 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-inventory\") pod \"dd768711-30f7-4520-845c-c2f7c45a7c6b\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.192254 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzw4b\" (UniqueName: \"kubernetes.io/projected/dd768711-30f7-4520-845c-c2f7c45a7c6b-kube-api-access-zzw4b\") pod \"dd768711-30f7-4520-845c-c2f7c45a7c6b\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.192423 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-1\") pod \"dd768711-30f7-4520-845c-c2f7c45a7c6b\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.192475 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-0\") pod \"dd768711-30f7-4520-845c-c2f7c45a7c6b\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.192561 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-ssh-key-openstack-edpm-ipam\") pod \"dd768711-30f7-4520-845c-c2f7c45a7c6b\" (UID: \"dd768711-30f7-4520-845c-c2f7c45a7c6b\") " Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.202973 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd768711-30f7-4520-845c-c2f7c45a7c6b-kube-api-access-zzw4b" (OuterVolumeSpecName: "kube-api-access-zzw4b") pod "dd768711-30f7-4520-845c-c2f7c45a7c6b" (UID: "dd768711-30f7-4520-845c-c2f7c45a7c6b"). InnerVolumeSpecName "kube-api-access-zzw4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.228340 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-inventory" (OuterVolumeSpecName: "inventory") pod "dd768711-30f7-4520-845c-c2f7c45a7c6b" (UID: "dd768711-30f7-4520-845c-c2f7c45a7c6b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.239249 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "dd768711-30f7-4520-845c-c2f7c45a7c6b" (UID: "dd768711-30f7-4520-845c-c2f7c45a7c6b"). InnerVolumeSpecName "logging-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.242317 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "dd768711-30f7-4520-845c-c2f7c45a7c6b" (UID: "dd768711-30f7-4520-845c-c2f7c45a7c6b"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.248313 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd768711-30f7-4520-845c-c2f7c45a7c6b" (UID: "dd768711-30f7-4520-845c-c2f7c45a7c6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.296633 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.296674 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-inventory\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.296688 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzw4b\" (UniqueName: \"kubernetes.io/projected/dd768711-30f7-4520-845c-c2f7c45a7c6b-kube-api-access-zzw4b\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.296701 4982 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.296717 4982 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd768711-30f7-4520-845c-c2f7c45a7c6b-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.647095 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" event={"ID":"dd768711-30f7-4520-845c-c2f7c45a7c6b","Type":"ContainerDied","Data":"d00650c28fdc400d5e35d038d71d09fb74de8af8d8b79d5002a94cd2872b30da"} Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.647365 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d00650c28fdc400d5e35d038d71d09fb74de8af8d8b79d5002a94cd2872b30da" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.647449 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wx72q" Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.661011 4982 generic.go:334] "Generic (PLEG): container finished" podID="03693509-fa32-4d05-879e-b7d5489ad991" containerID="dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73" exitCode=0 Feb 24 15:41:55 crc kubenswrapper[4982]: I0224 15:41:55.661103 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svzz" event={"ID":"03693509-fa32-4d05-879e-b7d5489ad991","Type":"ContainerDied","Data":"dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73"} Feb 24 15:41:56 crc kubenswrapper[4982]: I0224 15:41:56.676395 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svzz" event={"ID":"03693509-fa32-4d05-879e-b7d5489ad991","Type":"ContainerStarted","Data":"4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29"} Feb 24 15:41:56 crc kubenswrapper[4982]: I0224 15:41:56.734561 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9svzz" podStartSLOduration=3.181297379 podStartE2EDuration="9.734535135s" podCreationTimestamp="2026-02-24 15:41:47 +0000 UTC" firstStartedPulling="2026-02-24 15:41:49.577032785 +0000 UTC m=+3171.196091318" lastFinishedPulling="2026-02-24 15:41:56.130270571 +0000 UTC m=+3177.749329074" observedRunningTime="2026-02-24 15:41:56.712703951 +0000 UTC m=+3178.331762454" watchObservedRunningTime="2026-02-24 15:41:56.734535135 +0000 UTC m=+3178.353593648" Feb 24 15:41:58 crc kubenswrapper[4982]: I0224 15:41:58.071057 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:58 crc kubenswrapper[4982]: I0224 15:41:58.071359 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:41:59 crc kubenswrapper[4982]: I0224 15:41:59.168164 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9svzz" podUID="03693509-fa32-4d05-879e-b7d5489ad991" containerName="registry-server" probeResult="failure" output=< Feb 24 15:41:59 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:41:59 crc kubenswrapper[4982]: > Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.166553 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532462-p998c"] Feb 24 15:42:00 crc kubenswrapper[4982]: E0224 15:42:00.167193 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd768711-30f7-4520-845c-c2f7c45a7c6b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.167215 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd768711-30f7-4520-845c-c2f7c45a7c6b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.167557 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd768711-30f7-4520-845c-c2f7c45a7c6b" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.168593 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532462-p998c" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.173897 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.174295 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.174419 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.181183 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532462-p998c"] Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.230116 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdk6h\" (UniqueName: \"kubernetes.io/projected/8765e41d-6b9b-4707-bfc0-0d5a9faf5327-kube-api-access-xdk6h\") pod \"auto-csr-approver-29532462-p998c\" (UID: \"8765e41d-6b9b-4707-bfc0-0d5a9faf5327\") " pod="openshift-infra/auto-csr-approver-29532462-p998c" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.332089 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdk6h\" (UniqueName: \"kubernetes.io/projected/8765e41d-6b9b-4707-bfc0-0d5a9faf5327-kube-api-access-xdk6h\") pod \"auto-csr-approver-29532462-p998c\" (UID: \"8765e41d-6b9b-4707-bfc0-0d5a9faf5327\") " pod="openshift-infra/auto-csr-approver-29532462-p998c" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.349153 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdk6h\" (UniqueName: \"kubernetes.io/projected/8765e41d-6b9b-4707-bfc0-0d5a9faf5327-kube-api-access-xdk6h\") pod \"auto-csr-approver-29532462-p998c\" (UID: \"8765e41d-6b9b-4707-bfc0-0d5a9faf5327\") " pod="openshift-infra/auto-csr-approver-29532462-p998c" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.490008 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532462-p998c" Feb 24 15:42:00 crc kubenswrapper[4982]: I0224 15:42:00.981722 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532462-p998c"] Feb 24 15:42:01 crc kubenswrapper[4982]: I0224 15:42:01.742749 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532462-p998c" event={"ID":"8765e41d-6b9b-4707-bfc0-0d5a9faf5327","Type":"ContainerStarted","Data":"d72e7f910b77ae085e969f336e6e59e03b37faae04829d2559b9206a32c269f7"} Feb 24 15:42:02 crc kubenswrapper[4982]: I0224 15:42:02.755178 4982 generic.go:334] "Generic (PLEG): container finished" podID="8765e41d-6b9b-4707-bfc0-0d5a9faf5327" containerID="1c37409df9bf36d75456e6402d41525ab72ee5f8c5b5a066755a5c5fedfaaadb" exitCode=0 Feb 24 15:42:02 crc kubenswrapper[4982]: I0224 15:42:02.755290 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532462-p998c" event={"ID":"8765e41d-6b9b-4707-bfc0-0d5a9faf5327","Type":"ContainerDied","Data":"1c37409df9bf36d75456e6402d41525ab72ee5f8c5b5a066755a5c5fedfaaadb"} Feb 24 15:42:04 crc kubenswrapper[4982]: I0224 15:42:04.219662 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532462-p998c" Feb 24 15:42:04 crc kubenswrapper[4982]: I0224 15:42:04.327061 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdk6h\" (UniqueName: \"kubernetes.io/projected/8765e41d-6b9b-4707-bfc0-0d5a9faf5327-kube-api-access-xdk6h\") pod \"8765e41d-6b9b-4707-bfc0-0d5a9faf5327\" (UID: \"8765e41d-6b9b-4707-bfc0-0d5a9faf5327\") " Feb 24 15:42:04 crc kubenswrapper[4982]: I0224 15:42:04.336054 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8765e41d-6b9b-4707-bfc0-0d5a9faf5327-kube-api-access-xdk6h" (OuterVolumeSpecName: "kube-api-access-xdk6h") pod "8765e41d-6b9b-4707-bfc0-0d5a9faf5327" (UID: "8765e41d-6b9b-4707-bfc0-0d5a9faf5327"). InnerVolumeSpecName "kube-api-access-xdk6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:42:04 crc kubenswrapper[4982]: I0224 15:42:04.430972 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdk6h\" (UniqueName: \"kubernetes.io/projected/8765e41d-6b9b-4707-bfc0-0d5a9faf5327-kube-api-access-xdk6h\") on node \"crc\" DevicePath \"\"" Feb 24 15:42:04 crc kubenswrapper[4982]: I0224 15:42:04.780006 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532462-p998c" event={"ID":"8765e41d-6b9b-4707-bfc0-0d5a9faf5327","Type":"ContainerDied","Data":"d72e7f910b77ae085e969f336e6e59e03b37faae04829d2559b9206a32c269f7"} Feb 24 15:42:04 crc kubenswrapper[4982]: I0224 15:42:04.780048 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d72e7f910b77ae085e969f336e6e59e03b37faae04829d2559b9206a32c269f7" Feb 24 15:42:04 crc kubenswrapper[4982]: I0224 15:42:04.780070 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532462-p998c" Feb 24 15:42:05 crc kubenswrapper[4982]: I0224 15:42:05.350040 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532456-kxhlm"] Feb 24 15:42:05 crc kubenswrapper[4982]: I0224 15:42:05.360422 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532456-kxhlm"] Feb 24 15:42:07 crc kubenswrapper[4982]: I0224 15:42:07.166558 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4790e210-1fed-44a0-8f36-ac6e40b342b6" path="/var/lib/kubelet/pods/4790e210-1fed-44a0-8f36-ac6e40b342b6/volumes" Feb 24 15:42:08 crc kubenswrapper[4982]: I0224 15:42:08.124246 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:42:08 crc kubenswrapper[4982]: I0224 15:42:08.200640 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:42:08 crc kubenswrapper[4982]: I0224 15:42:08.376379 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9svzz"] Feb 24 15:42:08 crc kubenswrapper[4982]: I0224 15:42:08.738257 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:42:08 crc kubenswrapper[4982]: I0224 15:42:08.738326 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:42:08 crc kubenswrapper[4982]: I0224 15:42:08.738374 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:42:08 crc kubenswrapper[4982]: I0224 15:42:08.739344 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:42:08 crc kubenswrapper[4982]: I0224 15:42:08.739417 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" gracePeriod=600 Feb 24 15:42:08 crc kubenswrapper[4982]: E0224 15:42:08.875308 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:42:09 crc kubenswrapper[4982]: I0224 
Feb 24 15:42:09 crc kubenswrapper[4982]: I0224 15:42:09.843281 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" exitCode=0
Feb 24 15:42:09 crc kubenswrapper[4982]: I0224 15:42:09.843372 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9"}
Feb 24 15:42:09 crc kubenswrapper[4982]: I0224 15:42:09.843940 4982 scope.go:117] "RemoveContainer" containerID="bf6256e0be4e6c2433d64e926d1b0b4f0e8eaa1115b4db80d6492b104ec63749"
Feb 24 15:42:09 crc kubenswrapper[4982]: I0224 15:42:09.845058 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9svzz" podUID="03693509-fa32-4d05-879e-b7d5489ad991" containerName="registry-server" containerID="cri-o://4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29" gracePeriod=2
Feb 24 15:42:09 crc kubenswrapper[4982]: I0224 15:42:09.846263 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9"
Feb 24 15:42:09 crc kubenswrapper[4982]: E0224 15:42:09.848601 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.417417 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9svzz"
Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.606754 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-catalog-content\") pod \"03693509-fa32-4d05-879e-b7d5489ad991\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") "
Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.607207 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-utilities\") pod \"03693509-fa32-4d05-879e-b7d5489ad991\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") "
Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.607259 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcjxp\" (UniqueName: \"kubernetes.io/projected/03693509-fa32-4d05-879e-b7d5489ad991-kube-api-access-fcjxp\") pod \"03693509-fa32-4d05-879e-b7d5489ad991\" (UID: \"03693509-fa32-4d05-879e-b7d5489ad991\") "
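Two very different grace periods show up above: machine-config-daemon is killed with gracePeriod=600 (its terminationGracePeriodSeconds), while the catalog pod's registry-server gets gracePeriod=2. The pattern behind "Killing container with a grace period" is SIGTERM first, SIGKILL when the grace period expires; a stdlib-only sketch of that pattern with a placeholder workload, not CRI-O's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace models "Killing container with a grace period":
// send SIGTERM, then SIGKILL if the process outlives the grace period.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace expired: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // placeholder workload
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(killWithGrace(cmd, 2*time.Second)) // gracePeriod=2, like the catalog pod
}
```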
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.615797 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03693509-fa32-4d05-879e-b7d5489ad991-kube-api-access-fcjxp" (OuterVolumeSpecName: "kube-api-access-fcjxp") pod "03693509-fa32-4d05-879e-b7d5489ad991" (UID: "03693509-fa32-4d05-879e-b7d5489ad991"). InnerVolumeSpecName "kube-api-access-fcjxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.709538 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.709571 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcjxp\" (UniqueName: \"kubernetes.io/projected/03693509-fa32-4d05-879e-b7d5489ad991-kube-api-access-fcjxp\") on node \"crc\" DevicePath \"\"" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.727981 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03693509-fa32-4d05-879e-b7d5489ad991" (UID: "03693509-fa32-4d05-879e-b7d5489ad991"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.811870 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03693509-fa32-4d05-879e-b7d5489ad991-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.860675 4982 generic.go:334] "Generic (PLEG): container finished" podID="03693509-fa32-4d05-879e-b7d5489ad991" containerID="4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29" exitCode=0 Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.860737 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svzz" event={"ID":"03693509-fa32-4d05-879e-b7d5489ad991","Type":"ContainerDied","Data":"4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29"} Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.860781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svzz" event={"ID":"03693509-fa32-4d05-879e-b7d5489ad991","Type":"ContainerDied","Data":"4522cbf2e4653b63c72247350bcc6235719005cedb176b5ee7005cd7fa185462"} Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.860793 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9svzz" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.860810 4982 scope.go:117] "RemoveContainer" containerID="4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.893984 4982 scope.go:117] "RemoveContainer" containerID="dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.918160 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9svzz"] Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.924732 4982 scope.go:117] "RemoveContainer" containerID="02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.929142 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9svzz"] Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.984027 4982 scope.go:117] "RemoveContainer" containerID="4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29" Feb 24 15:42:10 crc kubenswrapper[4982]: E0224 15:42:10.984829 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29\": container with ID starting with 4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29 not found: ID does not exist" containerID="4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.984885 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29"} err="failed to get container status \"4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29\": rpc error: code = NotFound desc = could not find container \"4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29\": container with ID starting with 4972f8c9d45fa3563c60325e6ff411c50005e87f7df610fb9511ffdbd2411e29 not found: ID does not exist" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.984917 4982 scope.go:117] "RemoveContainer" containerID="dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73" Feb 24 15:42:10 crc kubenswrapper[4982]: E0224 15:42:10.985543 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73\": container with ID starting with dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73 not found: ID does not exist" containerID="dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.985573 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73"} err="failed to get container status \"dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73\": rpc error: code = NotFound desc = could not find container \"dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73\": container with ID starting with dac7b1f90cf8673852a8df4f1a75a034d03f5e5480ae980bfcbdbcd18c3aca73 not found: ID does not exist" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.985595 4982 scope.go:117] "RemoveContainer" 
containerID="02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba" Feb 24 15:42:10 crc kubenswrapper[4982]: E0224 15:42:10.985828 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba\": container with ID starting with 02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba not found: ID does not exist" containerID="02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba" Feb 24 15:42:10 crc kubenswrapper[4982]: I0224 15:42:10.985869 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba"} err="failed to get container status \"02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba\": rpc error: code = NotFound desc = could not find container \"02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba\": container with ID starting with 02bb7c175e14923e4b8d2d66877543b4a120823ab4dc536999b1968d5def69ba not found: ID does not exist" Feb 24 15:42:11 crc kubenswrapper[4982]: I0224 15:42:11.162005 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03693509-fa32-4d05-879e-b7d5489ad991" path="/var/lib/kubelet/pods/03693509-fa32-4d05-879e-b7d5489ad991/volumes" Feb 24 15:42:24 crc kubenswrapper[4982]: I0224 15:42:24.145824 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:42:24 crc kubenswrapper[4982]: E0224 15:42:24.146740 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:42:38 crc kubenswrapper[4982]: I0224 15:42:38.146740 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:42:38 crc kubenswrapper[4982]: E0224 15:42:38.147637 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:42:48 crc kubenswrapper[4982]: I0224 15:42:48.725909 4982 scope.go:117] "RemoveContainer" containerID="4525a92bac1ef3e30a40592c371737913bf6804e470ee2859d88c53a17162733" Feb 24 15:42:52 crc kubenswrapper[4982]: I0224 15:42:52.146144 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:42:52 crc kubenswrapper[4982]: E0224 15:42:52.147784 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" 
podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:42:59 crc kubenswrapper[4982]: E0224 15:42:59.604280 4982 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.50:38846->38.102.83.50:38677: write tcp 38.102.83.50:38846->38.102.83.50:38677: write: broken pipe Feb 24 15:43:05 crc kubenswrapper[4982]: I0224 15:43:05.146959 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:43:05 crc kubenswrapper[4982]: E0224 15:43:05.149256 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:43:07 crc kubenswrapper[4982]: E0224 15:43:07.087661 4982 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.50:54430->38.102.83.50:38677: write tcp 38.102.83.50:54430->38.102.83.50:38677: write: connection reset by peer Feb 24 15:43:19 crc kubenswrapper[4982]: I0224 15:43:19.156198 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:43:19 crc kubenswrapper[4982]: E0224 15:43:19.157782 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:43:34 crc kubenswrapper[4982]: I0224 15:43:34.146743 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:43:34 crc kubenswrapper[4982]: E0224 15:43:34.147540 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:43:45 crc kubenswrapper[4982]: I0224 15:43:45.147662 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:43:45 crc kubenswrapper[4982]: E0224 15:43:45.148889 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.418025 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jnxc7"] Feb 24 15:43:49 crc kubenswrapper[4982]: E0224 15:43:49.419599 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="03693509-fa32-4d05-879e-b7d5489ad991" containerName="extract-utilities" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.419623 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="03693509-fa32-4d05-879e-b7d5489ad991" containerName="extract-utilities" Feb 24 15:43:49 crc kubenswrapper[4982]: E0224 15:43:49.419652 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8765e41d-6b9b-4707-bfc0-0d5a9faf5327" containerName="oc" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.419665 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8765e41d-6b9b-4707-bfc0-0d5a9faf5327" containerName="oc" Feb 24 15:43:49 crc kubenswrapper[4982]: E0224 15:43:49.419712 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03693509-fa32-4d05-879e-b7d5489ad991" containerName="extract-content" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.419726 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="03693509-fa32-4d05-879e-b7d5489ad991" containerName="extract-content" Feb 24 15:43:49 crc kubenswrapper[4982]: E0224 15:43:49.419752 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03693509-fa32-4d05-879e-b7d5489ad991" containerName="registry-server" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.419765 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="03693509-fa32-4d05-879e-b7d5489ad991" containerName="registry-server" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.420258 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="03693509-fa32-4d05-879e-b7d5489ad991" containerName="registry-server" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.420288 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8765e41d-6b9b-4707-bfc0-0d5a9faf5327" containerName="oc" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.423143 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.437370 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnxc7"] Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.589926 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwch9\" (UniqueName: \"kubernetes.io/projected/404be72a-9d1c-4b27-8eac-9b7eefb42c46-kube-api-access-xwch9\") pod \"redhat-marketplace-jnxc7\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.590286 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-catalog-content\") pod \"redhat-marketplace-jnxc7\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.590487 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-utilities\") pod \"redhat-marketplace-jnxc7\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.692343 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-utilities\") pod \"redhat-marketplace-jnxc7\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.692484 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwch9\" (UniqueName: \"kubernetes.io/projected/404be72a-9d1c-4b27-8eac-9b7eefb42c46-kube-api-access-xwch9\") pod \"redhat-marketplace-jnxc7\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.692604 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-catalog-content\") pod \"redhat-marketplace-jnxc7\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.693135 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-utilities\") pod \"redhat-marketplace-jnxc7\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.693256 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-catalog-content\") pod \"redhat-marketplace-jnxc7\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.726425 4982 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xwch9\" (UniqueName: \"kubernetes.io/projected/404be72a-9d1c-4b27-8eac-9b7eefb42c46-kube-api-access-xwch9\") pod \"redhat-marketplace-jnxc7\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:49 crc kubenswrapper[4982]: I0224 15:43:49.745017 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:50 crc kubenswrapper[4982]: I0224 15:43:50.219299 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnxc7"] Feb 24 15:43:51 crc kubenswrapper[4982]: I0224 15:43:51.206644 4982 generic.go:334] "Generic (PLEG): container finished" podID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerID="65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19" exitCode=0 Feb 24 15:43:51 crc kubenswrapper[4982]: I0224 15:43:51.206696 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnxc7" event={"ID":"404be72a-9d1c-4b27-8eac-9b7eefb42c46","Type":"ContainerDied","Data":"65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19"} Feb 24 15:43:51 crc kubenswrapper[4982]: I0224 15:43:51.206943 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnxc7" event={"ID":"404be72a-9d1c-4b27-8eac-9b7eefb42c46","Type":"ContainerStarted","Data":"156eb1c72d493f851b0cbcc2b4f946b49249fb7dfc498a09e39b8b9b0973df3b"} Feb 24 15:43:53 crc kubenswrapper[4982]: I0224 15:43:53.234443 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnxc7" event={"ID":"404be72a-9d1c-4b27-8eac-9b7eefb42c46","Type":"ContainerStarted","Data":"d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f"} Feb 24 15:43:54 crc kubenswrapper[4982]: I0224 15:43:54.248674 4982 generic.go:334] "Generic (PLEG): container finished" podID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerID="d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f" exitCode=0 Feb 24 15:43:54 crc kubenswrapper[4982]: I0224 15:43:54.248777 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnxc7" event={"ID":"404be72a-9d1c-4b27-8eac-9b7eefb42c46","Type":"ContainerDied","Data":"d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f"} Feb 24 15:43:55 crc kubenswrapper[4982]: I0224 15:43:55.265343 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnxc7" event={"ID":"404be72a-9d1c-4b27-8eac-9b7eefb42c46","Type":"ContainerStarted","Data":"8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f"} Feb 24 15:43:55 crc kubenswrapper[4982]: I0224 15:43:55.306732 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jnxc7" podStartSLOduration=2.857853332 podStartE2EDuration="6.306708413s" podCreationTimestamp="2026-02-24 15:43:49 +0000 UTC" firstStartedPulling="2026-02-24 15:43:51.209366417 +0000 UTC m=+3292.828424950" lastFinishedPulling="2026-02-24 15:43:54.658221548 +0000 UTC m=+3296.277280031" observedRunningTime="2026-02-24 15:43:55.293843074 +0000 UTC m=+3296.912901587" watchObservedRunningTime="2026-02-24 15:43:55.306708413 +0000 UTC m=+3296.925766906" Feb 24 15:43:59 crc kubenswrapper[4982]: I0224 15:43:59.745386 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:59 crc kubenswrapper[4982]: I0224 15:43:59.746010 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:43:59 crc kubenswrapper[4982]: I0224 15:43:59.833187 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.146224 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:44:00 crc kubenswrapper[4982]: E0224 15:44:00.146862 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.213448 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532464-ghqk2"] Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.215967 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532464-ghqk2" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.218829 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.219081 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.222169 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.232802 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532464-ghqk2"] Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.371956 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t7dg\" (UniqueName: \"kubernetes.io/projected/86be1c41-d00b-4512-8272-77ec5988bf71-kube-api-access-9t7dg\") pod \"auto-csr-approver-29532464-ghqk2\" (UID: \"86be1c41-d00b-4512-8272-77ec5988bf71\") " pod="openshift-infra/auto-csr-approver-29532464-ghqk2" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.396833 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.461099 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnxc7"] Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.474599 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t7dg\" (UniqueName: \"kubernetes.io/projected/86be1c41-d00b-4512-8272-77ec5988bf71-kube-api-access-9t7dg\") pod \"auto-csr-approver-29532464-ghqk2\" (UID: \"86be1c41-d00b-4512-8272-77ec5988bf71\") " pod="openshift-infra/auto-csr-approver-29532464-ghqk2" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.496068 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t7dg\" 
(UniqueName: \"kubernetes.io/projected/86be1c41-d00b-4512-8272-77ec5988bf71-kube-api-access-9t7dg\") pod \"auto-csr-approver-29532464-ghqk2\" (UID: \"86be1c41-d00b-4512-8272-77ec5988bf71\") " pod="openshift-infra/auto-csr-approver-29532464-ghqk2" Feb 24 15:44:00 crc kubenswrapper[4982]: I0224 15:44:00.543868 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532464-ghqk2" Feb 24 15:44:01 crc kubenswrapper[4982]: I0224 15:44:01.033827 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532464-ghqk2"] Feb 24 15:44:01 crc kubenswrapper[4982]: I0224 15:44:01.350010 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532464-ghqk2" event={"ID":"86be1c41-d00b-4512-8272-77ec5988bf71","Type":"ContainerStarted","Data":"4661da73a13b99435a1d5ac72e3a24e18885d4cd15606ab5bb3a1601ef107f7c"} Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.360954 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jnxc7" podUID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerName="registry-server" containerID="cri-o://8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f" gracePeriod=2 Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.361397 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532464-ghqk2" event={"ID":"86be1c41-d00b-4512-8272-77ec5988bf71","Type":"ContainerStarted","Data":"5093d2b5d4c71411e6568ec6212396b081edac9b4fb4be3dd6367d01f7e47b66"} Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.392118 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532464-ghqk2" podStartSLOduration=1.558130531 podStartE2EDuration="2.3920705s" podCreationTimestamp="2026-02-24 15:44:00 +0000 UTC" firstStartedPulling="2026-02-24 15:44:01.033036051 +0000 UTC m=+3302.652094584" lastFinishedPulling="2026-02-24 15:44:01.86697606 +0000 UTC m=+3303.486034553" observedRunningTime="2026-02-24 15:44:02.37810632 +0000 UTC m=+3303.997164813" watchObservedRunningTime="2026-02-24 15:44:02.3920705 +0000 UTC m=+3304.011128993" Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.885184 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.933555 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwch9\" (UniqueName: \"kubernetes.io/projected/404be72a-9d1c-4b27-8eac-9b7eefb42c46-kube-api-access-xwch9\") pod \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.933702 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-utilities\") pod \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.933958 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-catalog-content\") pod \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\" (UID: \"404be72a-9d1c-4b27-8eac-9b7eefb42c46\") " Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.938929 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-utilities" (OuterVolumeSpecName: "utilities") pod "404be72a-9d1c-4b27-8eac-9b7eefb42c46" (UID: "404be72a-9d1c-4b27-8eac-9b7eefb42c46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.955942 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/404be72a-9d1c-4b27-8eac-9b7eefb42c46-kube-api-access-xwch9" (OuterVolumeSpecName: "kube-api-access-xwch9") pod "404be72a-9d1c-4b27-8eac-9b7eefb42c46" (UID: "404be72a-9d1c-4b27-8eac-9b7eefb42c46"). InnerVolumeSpecName "kube-api-access-xwch9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:44:02 crc kubenswrapper[4982]: I0224 15:44:02.960733 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "404be72a-9d1c-4b27-8eac-9b7eefb42c46" (UID: "404be72a-9d1c-4b27-8eac-9b7eefb42c46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.037037 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.037344 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwch9\" (UniqueName: \"kubernetes.io/projected/404be72a-9d1c-4b27-8eac-9b7eefb42c46-kube-api-access-xwch9\") on node \"crc\" DevicePath \"\"" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.037420 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/404be72a-9d1c-4b27-8eac-9b7eefb42c46-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.373185 4982 generic.go:334] "Generic (PLEG): container finished" podID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerID="8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f" exitCode=0 Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.373263 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnxc7" event={"ID":"404be72a-9d1c-4b27-8eac-9b7eefb42c46","Type":"ContainerDied","Data":"8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f"} Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.373302 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnxc7" event={"ID":"404be72a-9d1c-4b27-8eac-9b7eefb42c46","Type":"ContainerDied","Data":"156eb1c72d493f851b0cbcc2b4f946b49249fb7dfc498a09e39b8b9b0973df3b"} Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.373319 4982 scope.go:117] "RemoveContainer" containerID="8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.373463 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnxc7" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.376720 4982 generic.go:334] "Generic (PLEG): container finished" podID="86be1c41-d00b-4512-8272-77ec5988bf71" containerID="5093d2b5d4c71411e6568ec6212396b081edac9b4fb4be3dd6367d01f7e47b66" exitCode=0 Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.376773 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532464-ghqk2" event={"ID":"86be1c41-d00b-4512-8272-77ec5988bf71","Type":"ContainerDied","Data":"5093d2b5d4c71411e6568ec6212396b081edac9b4fb4be3dd6367d01f7e47b66"} Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.413866 4982 scope.go:117] "RemoveContainer" containerID="d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.419117 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnxc7"] Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.441759 4982 scope.go:117] "RemoveContainer" containerID="65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.450717 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnxc7"] Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.505688 4982 scope.go:117] "RemoveContainer" containerID="8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f" Feb 24 15:44:03 crc kubenswrapper[4982]: E0224 15:44:03.506028 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f\": container with ID starting with 8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f not found: ID does not exist" containerID="8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.506065 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f"} err="failed to get container status \"8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f\": rpc error: code = NotFound desc = could not find container \"8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f\": container with ID starting with 8923fe9122a066e4b2e55816f2c2421e9449490b3ca9423acd425aafeab6e18f not found: ID does not exist" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.506091 4982 scope.go:117] "RemoveContainer" containerID="d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f" Feb 24 15:44:03 crc kubenswrapper[4982]: E0224 15:44:03.506376 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f\": container with ID starting with d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f not found: ID does not exist" containerID="d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.507045 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f"} err="failed to get container status 
\"d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f\": rpc error: code = NotFound desc = could not find container \"d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f\": container with ID starting with d902cacc2c2ef235b817cd45df6fefd5fd97718cc8987c5bf1d4d071227d283f not found: ID does not exist" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.507066 4982 scope.go:117] "RemoveContainer" containerID="65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19" Feb 24 15:44:03 crc kubenswrapper[4982]: E0224 15:44:03.507528 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19\": container with ID starting with 65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19 not found: ID does not exist" containerID="65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19" Feb 24 15:44:03 crc kubenswrapper[4982]: I0224 15:44:03.507567 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19"} err="failed to get container status \"65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19\": rpc error: code = NotFound desc = could not find container \"65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19\": container with ID starting with 65a95c05bc1e8c8ed641f7abcc13eb834953969cff80d663bfc1144673875b19 not found: ID does not exist" Feb 24 15:44:04 crc kubenswrapper[4982]: I0224 15:44:04.806751 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532464-ghqk2" Feb 24 15:44:04 crc kubenswrapper[4982]: I0224 15:44:04.889565 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t7dg\" (UniqueName: \"kubernetes.io/projected/86be1c41-d00b-4512-8272-77ec5988bf71-kube-api-access-9t7dg\") pod \"86be1c41-d00b-4512-8272-77ec5988bf71\" (UID: \"86be1c41-d00b-4512-8272-77ec5988bf71\") " Feb 24 15:44:04 crc kubenswrapper[4982]: I0224 15:44:04.903677 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86be1c41-d00b-4512-8272-77ec5988bf71-kube-api-access-9t7dg" (OuterVolumeSpecName: "kube-api-access-9t7dg") pod "86be1c41-d00b-4512-8272-77ec5988bf71" (UID: "86be1c41-d00b-4512-8272-77ec5988bf71"). InnerVolumeSpecName "kube-api-access-9t7dg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:44:04 crc kubenswrapper[4982]: I0224 15:44:04.994517 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t7dg\" (UniqueName: \"kubernetes.io/projected/86be1c41-d00b-4512-8272-77ec5988bf71-kube-api-access-9t7dg\") on node \"crc\" DevicePath \"\"" Feb 24 15:44:05 crc kubenswrapper[4982]: I0224 15:44:05.162591 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" path="/var/lib/kubelet/pods/404be72a-9d1c-4b27-8eac-9b7eefb42c46/volumes" Feb 24 15:44:05 crc kubenswrapper[4982]: I0224 15:44:05.406377 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532464-ghqk2" event={"ID":"86be1c41-d00b-4512-8272-77ec5988bf71","Type":"ContainerDied","Data":"4661da73a13b99435a1d5ac72e3a24e18885d4cd15606ab5bb3a1601ef107f7c"} Feb 24 15:44:05 crc kubenswrapper[4982]: I0224 15:44:05.406438 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4661da73a13b99435a1d5ac72e3a24e18885d4cd15606ab5bb3a1601ef107f7c" Feb 24 15:44:05 crc kubenswrapper[4982]: I0224 15:44:05.406565 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532464-ghqk2" Feb 24 15:44:05 crc kubenswrapper[4982]: I0224 15:44:05.468728 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532458-jfvd9"] Feb 24 15:44:05 crc kubenswrapper[4982]: I0224 15:44:05.481871 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532458-jfvd9"] Feb 24 15:44:07 crc kubenswrapper[4982]: I0224 15:44:07.164236 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5bf17f1-b2a2-4341-92fc-7711a71e8766" path="/var/lib/kubelet/pods/e5bf17f1-b2a2-4341-92fc-7711a71e8766/volumes" Feb 24 15:44:12 crc kubenswrapper[4982]: I0224 15:44:12.145371 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:44:12 crc kubenswrapper[4982]: E0224 15:44:12.146049 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:44:23 crc kubenswrapper[4982]: I0224 15:44:23.145376 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:44:23 crc kubenswrapper[4982]: E0224 15:44:23.146220 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:44:36 crc kubenswrapper[4982]: I0224 15:44:36.146205 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:44:36 crc kubenswrapper[4982]: E0224 15:44:36.147219 4982 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:44:48 crc kubenswrapper[4982]: I0224 15:44:48.906719 4982 scope.go:117] "RemoveContainer" containerID="4ef029c85945ddaca6c1362b1c2f3e3e8cbd2ae89c61ffaeb5e6a06c2fe51701" Feb 24 15:44:50 crc kubenswrapper[4982]: I0224 15:44:50.145453 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:44:50 crc kubenswrapper[4982]: E0224 15:44:50.146328 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.164616 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n"] Feb 24 15:45:00 crc kubenswrapper[4982]: E0224 15:45:00.165959 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86be1c41-d00b-4512-8272-77ec5988bf71" containerName="oc" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.165979 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="86be1c41-d00b-4512-8272-77ec5988bf71" containerName="oc" Feb 24 15:45:00 crc kubenswrapper[4982]: E0224 15:45:00.166002 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerName="extract-utilities" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.166011 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerName="extract-utilities" Feb 24 15:45:00 crc kubenswrapper[4982]: E0224 15:45:00.166029 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerName="registry-server" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.166039 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerName="registry-server" Feb 24 15:45:00 crc kubenswrapper[4982]: E0224 15:45:00.166061 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerName="extract-content" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.166069 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerName="extract-content" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.166392 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="404be72a-9d1c-4b27-8eac-9b7eefb42c46" containerName="registry-server" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.166424 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="86be1c41-d00b-4512-8272-77ec5988bf71" containerName="oc" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.167784 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.172164 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.172482 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.176572 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n"] Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.269951 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlm6\" (UniqueName: \"kubernetes.io/projected/7f08c468-b7cc-4ca7-bc1c-750894e12286-kube-api-access-nqlm6\") pod \"collect-profiles-29532465-45w9n\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.270372 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f08c468-b7cc-4ca7-bc1c-750894e12286-secret-volume\") pod \"collect-profiles-29532465-45w9n\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.270421 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f08c468-b7cc-4ca7-bc1c-750894e12286-config-volume\") pod \"collect-profiles-29532465-45w9n\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.372790 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f08c468-b7cc-4ca7-bc1c-750894e12286-secret-volume\") pod \"collect-profiles-29532465-45w9n\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.372902 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f08c468-b7cc-4ca7-bc1c-750894e12286-config-volume\") pod \"collect-profiles-29532465-45w9n\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.373212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlm6\" (UniqueName: \"kubernetes.io/projected/7f08c468-b7cc-4ca7-bc1c-750894e12286-kube-api-access-nqlm6\") pod \"collect-profiles-29532465-45w9n\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.375005 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f08c468-b7cc-4ca7-bc1c-750894e12286-config-volume\") pod 
\"collect-profiles-29532465-45w9n\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.386181 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f08c468-b7cc-4ca7-bc1c-750894e12286-secret-volume\") pod \"collect-profiles-29532465-45w9n\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.395350 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlm6\" (UniqueName: \"kubernetes.io/projected/7f08c468-b7cc-4ca7-bc1c-750894e12286-kube-api-access-nqlm6\") pod \"collect-profiles-29532465-45w9n\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.497769 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:00 crc kubenswrapper[4982]: I0224 15:45:00.999065 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n"] Feb 24 15:45:01 crc kubenswrapper[4982]: I0224 15:45:01.192262 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" event={"ID":"7f08c468-b7cc-4ca7-bc1c-750894e12286","Type":"ContainerStarted","Data":"92f7d16bb4a1ae6744131a7e4f32640fc3bc1fabe35211ac0e027d5742f82b5d"} Feb 24 15:45:01 crc kubenswrapper[4982]: I0224 15:45:01.192669 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" event={"ID":"7f08c468-b7cc-4ca7-bc1c-750894e12286","Type":"ContainerStarted","Data":"e5b3615018fd2d10a8cff6dbc7e5e116258c1a089946c31b372ba799c758fb54"} Feb 24 15:45:01 crc kubenswrapper[4982]: I0224 15:45:01.206878 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" podStartSLOduration=1.206860658 podStartE2EDuration="1.206860658s" podCreationTimestamp="2026-02-24 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 15:45:01.206420466 +0000 UTC m=+3362.825478959" watchObservedRunningTime="2026-02-24 15:45:01.206860658 +0000 UTC m=+3362.825919151" Feb 24 15:45:02 crc kubenswrapper[4982]: I0224 15:45:02.206133 4982 generic.go:334] "Generic (PLEG): container finished" podID="7f08c468-b7cc-4ca7-bc1c-750894e12286" containerID="92f7d16bb4a1ae6744131a7e4f32640fc3bc1fabe35211ac0e027d5742f82b5d" exitCode=0 Feb 24 15:45:02 crc kubenswrapper[4982]: I0224 15:45:02.206218 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" event={"ID":"7f08c468-b7cc-4ca7-bc1c-750894e12286","Type":"ContainerDied","Data":"92f7d16bb4a1ae6744131a7e4f32640fc3bc1fabe35211ac0e027d5742f82b5d"} Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.618072 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.758539 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f08c468-b7cc-4ca7-bc1c-750894e12286-config-volume\") pod \"7f08c468-b7cc-4ca7-bc1c-750894e12286\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.759382 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f08c468-b7cc-4ca7-bc1c-750894e12286-config-volume" (OuterVolumeSpecName: "config-volume") pod "7f08c468-b7cc-4ca7-bc1c-750894e12286" (UID: "7f08c468-b7cc-4ca7-bc1c-750894e12286"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.759452 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqlm6\" (UniqueName: \"kubernetes.io/projected/7f08c468-b7cc-4ca7-bc1c-750894e12286-kube-api-access-nqlm6\") pod \"7f08c468-b7cc-4ca7-bc1c-750894e12286\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.759624 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f08c468-b7cc-4ca7-bc1c-750894e12286-secret-volume\") pod \"7f08c468-b7cc-4ca7-bc1c-750894e12286\" (UID: \"7f08c468-b7cc-4ca7-bc1c-750894e12286\") " Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.760256 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f08c468-b7cc-4ca7-bc1c-750894e12286-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.765594 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f08c468-b7cc-4ca7-bc1c-750894e12286-kube-api-access-nqlm6" (OuterVolumeSpecName: "kube-api-access-nqlm6") pod "7f08c468-b7cc-4ca7-bc1c-750894e12286" (UID: "7f08c468-b7cc-4ca7-bc1c-750894e12286"). InnerVolumeSpecName "kube-api-access-nqlm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.767586 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f08c468-b7cc-4ca7-bc1c-750894e12286-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7f08c468-b7cc-4ca7-bc1c-750894e12286" (UID: "7f08c468-b7cc-4ca7-bc1c-750894e12286"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.863318 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqlm6\" (UniqueName: \"kubernetes.io/projected/7f08c468-b7cc-4ca7-bc1c-750894e12286-kube-api-access-nqlm6\") on node \"crc\" DevicePath \"\"" Feb 24 15:45:03 crc kubenswrapper[4982]: I0224 15:45:03.863350 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f08c468-b7cc-4ca7-bc1c-750894e12286-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 15:45:04 crc kubenswrapper[4982]: I0224 15:45:04.236048 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" event={"ID":"7f08c468-b7cc-4ca7-bc1c-750894e12286","Type":"ContainerDied","Data":"e5b3615018fd2d10a8cff6dbc7e5e116258c1a089946c31b372ba799c758fb54"} Feb 24 15:45:04 crc kubenswrapper[4982]: I0224 15:45:04.236113 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5b3615018fd2d10a8cff6dbc7e5e116258c1a089946c31b372ba799c758fb54" Feb 24 15:45:04 crc kubenswrapper[4982]: I0224 15:45:04.236231 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n" Feb 24 15:45:04 crc kubenswrapper[4982]: I0224 15:45:04.322723 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf"] Feb 24 15:45:04 crc kubenswrapper[4982]: I0224 15:45:04.332730 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532420-vw9gf"] Feb 24 15:45:05 crc kubenswrapper[4982]: I0224 15:45:05.148709 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:45:05 crc kubenswrapper[4982]: E0224 15:45:05.149890 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:45:05 crc kubenswrapper[4982]: I0224 15:45:05.165932 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe339d91-5ca8-43ef-ab0a-552ebfc10fec" path="/var/lib/kubelet/pods/fe339d91-5ca8-43ef-ab0a-552ebfc10fec/volumes" Feb 24 15:45:18 crc kubenswrapper[4982]: I0224 15:45:18.146491 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:45:18 crc kubenswrapper[4982]: E0224 15:45:18.147971 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:45:31 crc kubenswrapper[4982]: I0224 15:45:31.146975 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:45:31 
crc kubenswrapper[4982]: E0224 15:45:31.147882 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:45:42 crc kubenswrapper[4982]: I0224 15:45:42.146804 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:45:42 crc kubenswrapper[4982]: E0224 15:45:42.147766 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:45:49 crc kubenswrapper[4982]: I0224 15:45:49.023133 4982 scope.go:117] "RemoveContainer" containerID="5bba91b626ad3219dcc2ccac55a24ac4389bad36882189f71a14e9f00a383bad" Feb 24 15:45:53 crc kubenswrapper[4982]: I0224 15:45:53.146050 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:45:53 crc kubenswrapper[4982]: E0224 15:45:53.147083 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.162589 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532466-8lxf9"] Feb 24 15:46:00 crc kubenswrapper[4982]: E0224 15:46:00.163860 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f08c468-b7cc-4ca7-bc1c-750894e12286" containerName="collect-profiles" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.163878 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f08c468-b7cc-4ca7-bc1c-750894e12286" containerName="collect-profiles" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.164187 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f08c468-b7cc-4ca7-bc1c-750894e12286" containerName="collect-profiles" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.165134 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532466-8lxf9" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.168408 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.168646 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.171479 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.174627 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532466-8lxf9"] Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.223906 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg6qh\" (UniqueName: \"kubernetes.io/projected/3e50a491-45fb-49bc-8a5c-fc2d52edd85f-kube-api-access-dg6qh\") pod \"auto-csr-approver-29532466-8lxf9\" (UID: \"3e50a491-45fb-49bc-8a5c-fc2d52edd85f\") " pod="openshift-infra/auto-csr-approver-29532466-8lxf9" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.327216 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg6qh\" (UniqueName: \"kubernetes.io/projected/3e50a491-45fb-49bc-8a5c-fc2d52edd85f-kube-api-access-dg6qh\") pod \"auto-csr-approver-29532466-8lxf9\" (UID: \"3e50a491-45fb-49bc-8a5c-fc2d52edd85f\") " pod="openshift-infra/auto-csr-approver-29532466-8lxf9" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.349083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg6qh\" (UniqueName: \"kubernetes.io/projected/3e50a491-45fb-49bc-8a5c-fc2d52edd85f-kube-api-access-dg6qh\") pod \"auto-csr-approver-29532466-8lxf9\" (UID: \"3e50a491-45fb-49bc-8a5c-fc2d52edd85f\") " pod="openshift-infra/auto-csr-approver-29532466-8lxf9" Feb 24 15:46:00 crc kubenswrapper[4982]: I0224 15:46:00.488356 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532466-8lxf9" Feb 24 15:46:01 crc kubenswrapper[4982]: I0224 15:46:01.535374 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532466-8lxf9"] Feb 24 15:46:01 crc kubenswrapper[4982]: W0224 15:46:01.535958 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e50a491_45fb_49bc_8a5c_fc2d52edd85f.slice/crio-04a2045ed7fe0f60e1cf5ffa7e0299761ec62976efdd2b6f681da11b5ca6bee9 WatchSource:0}: Error finding container 04a2045ed7fe0f60e1cf5ffa7e0299761ec62976efdd2b6f681da11b5ca6bee9: Status 404 returned error can't find the container with id 04a2045ed7fe0f60e1cf5ffa7e0299761ec62976efdd2b6f681da11b5ca6bee9 Feb 24 15:46:01 crc kubenswrapper[4982]: I0224 15:46:01.912838 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532466-8lxf9" event={"ID":"3e50a491-45fb-49bc-8a5c-fc2d52edd85f","Type":"ContainerStarted","Data":"04a2045ed7fe0f60e1cf5ffa7e0299761ec62976efdd2b6f681da11b5ca6bee9"} Feb 24 15:46:03 crc kubenswrapper[4982]: I0224 15:46:03.954707 4982 generic.go:334] "Generic (PLEG): container finished" podID="3e50a491-45fb-49bc-8a5c-fc2d52edd85f" containerID="ec076ec2a4737dd0e7f83da43ff592162241cb63b425f3f4bef8f039fde6dde1" exitCode=0 Feb 24 15:46:03 crc kubenswrapper[4982]: I0224 15:46:03.954805 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532466-8lxf9" event={"ID":"3e50a491-45fb-49bc-8a5c-fc2d52edd85f","Type":"ContainerDied","Data":"ec076ec2a4737dd0e7f83da43ff592162241cb63b425f3f4bef8f039fde6dde1"} Feb 24 15:46:05 crc kubenswrapper[4982]: I0224 15:46:05.509201 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532466-8lxf9" Feb 24 15:46:05 crc kubenswrapper[4982]: I0224 15:46:05.559516 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg6qh\" (UniqueName: \"kubernetes.io/projected/3e50a491-45fb-49bc-8a5c-fc2d52edd85f-kube-api-access-dg6qh\") pod \"3e50a491-45fb-49bc-8a5c-fc2d52edd85f\" (UID: \"3e50a491-45fb-49bc-8a5c-fc2d52edd85f\") " Feb 24 15:46:05 crc kubenswrapper[4982]: I0224 15:46:05.566392 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e50a491-45fb-49bc-8a5c-fc2d52edd85f-kube-api-access-dg6qh" (OuterVolumeSpecName: "kube-api-access-dg6qh") pod "3e50a491-45fb-49bc-8a5c-fc2d52edd85f" (UID: "3e50a491-45fb-49bc-8a5c-fc2d52edd85f"). InnerVolumeSpecName "kube-api-access-dg6qh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:46:05 crc kubenswrapper[4982]: I0224 15:46:05.662565 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg6qh\" (UniqueName: \"kubernetes.io/projected/3e50a491-45fb-49bc-8a5c-fc2d52edd85f-kube-api-access-dg6qh\") on node \"crc\" DevicePath \"\"" Feb 24 15:46:05 crc kubenswrapper[4982]: I0224 15:46:05.991562 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532466-8lxf9" event={"ID":"3e50a491-45fb-49bc-8a5c-fc2d52edd85f","Type":"ContainerDied","Data":"04a2045ed7fe0f60e1cf5ffa7e0299761ec62976efdd2b6f681da11b5ca6bee9"} Feb 24 15:46:05 crc kubenswrapper[4982]: I0224 15:46:05.991624 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a2045ed7fe0f60e1cf5ffa7e0299761ec62976efdd2b6f681da11b5ca6bee9" Feb 24 15:46:05 crc kubenswrapper[4982]: I0224 15:46:05.991645 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532466-8lxf9" Feb 24 15:46:06 crc kubenswrapper[4982]: I0224 15:46:06.593543 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532460-f8rxs"] Feb 24 15:46:06 crc kubenswrapper[4982]: I0224 15:46:06.606127 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532460-f8rxs"] Feb 24 15:46:07 crc kubenswrapper[4982]: I0224 15:46:07.164226 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c7c372-a351-4958-93a9-40356bdfe6a4" path="/var/lib/kubelet/pods/a7c7c372-a351-4958-93a9-40356bdfe6a4/volumes" Feb 24 15:46:08 crc kubenswrapper[4982]: I0224 15:46:08.146347 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:46:08 crc kubenswrapper[4982]: E0224 15:46:08.146994 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:46:20 crc kubenswrapper[4982]: I0224 15:46:20.147188 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:46:20 crc kubenswrapper[4982]: E0224 15:46:20.148053 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:46:32 crc kubenswrapper[4982]: I0224 15:46:32.146463 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:46:32 crc kubenswrapper[4982]: E0224 15:46:32.147680 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:46:47 crc kubenswrapper[4982]: I0224 15:46:47.145892 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:46:47 crc kubenswrapper[4982]: E0224 15:46:47.146762 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:46:49 crc kubenswrapper[4982]: I0224 15:46:49.096534 4982 scope.go:117] "RemoveContainer" containerID="c2f28db5f69ae4e1a74997cb27882d2a803358573d534a5561b27bc026e91a2c" Feb 24 15:46:58 crc kubenswrapper[4982]: I0224 15:46:58.145980 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:46:58 crc kubenswrapper[4982]: E0224 15:46:58.146790 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:47:10 crc kubenswrapper[4982]: I0224 15:47:10.146981 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:47:10 crc kubenswrapper[4982]: I0224 15:47:10.808433 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"34ed4eece1be40eaaa7c3e64e060c17a554c39745fde6c1810caa6b910b562a6"} Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.328299 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-85g7j"] Feb 24 15:47:15 crc kubenswrapper[4982]: E0224 15:47:15.329521 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e50a491-45fb-49bc-8a5c-fc2d52edd85f" containerName="oc" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.329537 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e50a491-45fb-49bc-8a5c-fc2d52edd85f" containerName="oc" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.330004 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e50a491-45fb-49bc-8a5c-fc2d52edd85f" containerName="oc" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.332127 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.345914 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85g7j"] Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.363854 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-catalog-content\") pod \"community-operators-85g7j\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.363894 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-utilities\") pod \"community-operators-85g7j\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.363924 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57txf\" (UniqueName: \"kubernetes.io/projected/484fef48-0d45-4f66-9075-c2d318d85c74-kube-api-access-57txf\") pod \"community-operators-85g7j\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.466805 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-catalog-content\") pod \"community-operators-85g7j\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.466876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-utilities\") pod \"community-operators-85g7j\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.466912 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57txf\" (UniqueName: \"kubernetes.io/projected/484fef48-0d45-4f66-9075-c2d318d85c74-kube-api-access-57txf\") pod \"community-operators-85g7j\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.467819 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-catalog-content\") pod \"community-operators-85g7j\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.467863 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-utilities\") pod \"community-operators-85g7j\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.487091 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-57txf\" (UniqueName: \"kubernetes.io/projected/484fef48-0d45-4f66-9075-c2d318d85c74-kube-api-access-57txf\") pod \"community-operators-85g7j\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:15 crc kubenswrapper[4982]: I0224 15:47:15.663788 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:16 crc kubenswrapper[4982]: I0224 15:47:16.388836 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85g7j"] Feb 24 15:47:16 crc kubenswrapper[4982]: I0224 15:47:16.882581 4982 generic.go:334] "Generic (PLEG): container finished" podID="484fef48-0d45-4f66-9075-c2d318d85c74" containerID="c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73" exitCode=0 Feb 24 15:47:16 crc kubenswrapper[4982]: I0224 15:47:16.882684 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85g7j" event={"ID":"484fef48-0d45-4f66-9075-c2d318d85c74","Type":"ContainerDied","Data":"c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73"} Feb 24 15:47:16 crc kubenswrapper[4982]: I0224 15:47:16.883564 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85g7j" event={"ID":"484fef48-0d45-4f66-9075-c2d318d85c74","Type":"ContainerStarted","Data":"fe069bd3606d755aa731f56a81a0fcb79fdc49aee32d9f7164b220730c26a795"} Feb 24 15:47:16 crc kubenswrapper[4982]: I0224 15:47:16.886767 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 15:47:17 crc kubenswrapper[4982]: I0224 15:47:17.897908 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85g7j" event={"ID":"484fef48-0d45-4f66-9075-c2d318d85c74","Type":"ContainerStarted","Data":"996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04"} Feb 24 15:47:19 crc kubenswrapper[4982]: I0224 15:47:19.945493 4982 generic.go:334] "Generic (PLEG): container finished" podID="484fef48-0d45-4f66-9075-c2d318d85c74" containerID="996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04" exitCode=0 Feb 24 15:47:19 crc kubenswrapper[4982]: I0224 15:47:19.945590 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85g7j" event={"ID":"484fef48-0d45-4f66-9075-c2d318d85c74","Type":"ContainerDied","Data":"996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04"} Feb 24 15:47:20 crc kubenswrapper[4982]: I0224 15:47:20.957960 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85g7j" event={"ID":"484fef48-0d45-4f66-9075-c2d318d85c74","Type":"ContainerStarted","Data":"2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7"} Feb 24 15:47:20 crc kubenswrapper[4982]: I0224 15:47:20.981537 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-85g7j" podStartSLOduration=2.519862153 podStartE2EDuration="5.981519528s" podCreationTimestamp="2026-02-24 15:47:15 +0000 UTC" firstStartedPulling="2026-02-24 15:47:16.886546974 +0000 UTC m=+3498.505605467" lastFinishedPulling="2026-02-24 15:47:20.348204349 +0000 UTC m=+3501.967262842" observedRunningTime="2026-02-24 15:47:20.981164647 +0000 UTC m=+3502.600223150" watchObservedRunningTime="2026-02-24 
15:47:20.981519528 +0000 UTC m=+3502.600578031" Feb 24 15:47:25 crc kubenswrapper[4982]: I0224 15:47:25.664963 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:25 crc kubenswrapper[4982]: I0224 15:47:25.665704 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:26 crc kubenswrapper[4982]: I0224 15:47:26.731194 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-85g7j" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" containerName="registry-server" probeResult="failure" output=< Feb 24 15:47:26 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:47:26 crc kubenswrapper[4982]: > Feb 24 15:47:35 crc kubenswrapper[4982]: I0224 15:47:35.725365 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:35 crc kubenswrapper[4982]: I0224 15:47:35.789279 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:35 crc kubenswrapper[4982]: I0224 15:47:35.976147 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-85g7j"] Feb 24 15:47:37 crc kubenswrapper[4982]: I0224 15:47:37.182473 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-85g7j" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" containerName="registry-server" containerID="cri-o://2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7" gracePeriod=2 Feb 24 15:47:37 crc kubenswrapper[4982]: I0224 15:47:37.747133 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:37 crc kubenswrapper[4982]: I0224 15:47:37.900478 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57txf\" (UniqueName: \"kubernetes.io/projected/484fef48-0d45-4f66-9075-c2d318d85c74-kube-api-access-57txf\") pod \"484fef48-0d45-4f66-9075-c2d318d85c74\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " Feb 24 15:47:37 crc kubenswrapper[4982]: I0224 15:47:37.900657 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-utilities\") pod \"484fef48-0d45-4f66-9075-c2d318d85c74\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " Feb 24 15:47:37 crc kubenswrapper[4982]: I0224 15:47:37.900815 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-catalog-content\") pod \"484fef48-0d45-4f66-9075-c2d318d85c74\" (UID: \"484fef48-0d45-4f66-9075-c2d318d85c74\") " Feb 24 15:47:37 crc kubenswrapper[4982]: I0224 15:47:37.901439 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-utilities" (OuterVolumeSpecName: "utilities") pod "484fef48-0d45-4f66-9075-c2d318d85c74" (UID: "484fef48-0d45-4f66-9075-c2d318d85c74"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:47:37 crc kubenswrapper[4982]: I0224 15:47:37.912800 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484fef48-0d45-4f66-9075-c2d318d85c74-kube-api-access-57txf" (OuterVolumeSpecName: "kube-api-access-57txf") pod "484fef48-0d45-4f66-9075-c2d318d85c74" (UID: "484fef48-0d45-4f66-9075-c2d318d85c74"). InnerVolumeSpecName "kube-api-access-57txf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:47:37 crc kubenswrapper[4982]: I0224 15:47:37.965198 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "484fef48-0d45-4f66-9075-c2d318d85c74" (UID: "484fef48-0d45-4f66-9075-c2d318d85c74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.004690 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57txf\" (UniqueName: \"kubernetes.io/projected/484fef48-0d45-4f66-9075-c2d318d85c74-kube-api-access-57txf\") on node \"crc\" DevicePath \"\"" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.004750 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.004770 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fef48-0d45-4f66-9075-c2d318d85c74-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.198353 4982 generic.go:334] "Generic (PLEG): container finished" podID="484fef48-0d45-4f66-9075-c2d318d85c74" containerID="2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7" exitCode=0 Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.198430 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-85g7j" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.198434 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85g7j" event={"ID":"484fef48-0d45-4f66-9075-c2d318d85c74","Type":"ContainerDied","Data":"2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7"} Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.198598 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85g7j" event={"ID":"484fef48-0d45-4f66-9075-c2d318d85c74","Type":"ContainerDied","Data":"fe069bd3606d755aa731f56a81a0fcb79fdc49aee32d9f7164b220730c26a795"} Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.198688 4982 scope.go:117] "RemoveContainer" containerID="2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.243403 4982 scope.go:117] "RemoveContainer" containerID="996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.283903 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-85g7j"] Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.291341 4982 scope.go:117] "RemoveContainer" containerID="c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.296316 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-85g7j"] Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.376064 4982 scope.go:117] "RemoveContainer" containerID="2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7" Feb 24 15:47:38 crc kubenswrapper[4982]: E0224 15:47:38.376755 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7\": container with ID starting with 2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7 not found: ID does not exist" containerID="2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.376849 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7"} err="failed to get container status \"2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7\": rpc error: code = NotFound desc = could not find container \"2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7\": container with ID starting with 2082d554fd031d776182b6239d7336ec0450841acbf9f2eb35c0ba18507aede7 not found: ID does not exist" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.376932 4982 scope.go:117] "RemoveContainer" containerID="996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04" Feb 24 15:47:38 crc kubenswrapper[4982]: E0224 15:47:38.377448 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04\": container with ID starting with 996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04 not found: ID does not exist" containerID="996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.377570 4982 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04"} err="failed to get container status \"996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04\": rpc error: code = NotFound desc = could not find container \"996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04\": container with ID starting with 996582e6002da828c737fb489a7e3f11a2b032a45720c9fcf27887f86e0cda04 not found: ID does not exist" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.377653 4982 scope.go:117] "RemoveContainer" containerID="c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73" Feb 24 15:47:38 crc kubenswrapper[4982]: E0224 15:47:38.378020 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73\": container with ID starting with c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73 not found: ID does not exist" containerID="c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73" Feb 24 15:47:38 crc kubenswrapper[4982]: I0224 15:47:38.378052 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73"} err="failed to get container status \"c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73\": rpc error: code = NotFound desc = could not find container \"c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73\": container with ID starting with c23ae81224ea12ef98f223e8839ee7c5e5f45e80078b116869e3cd59ffae4e73 not found: ID does not exist" Feb 24 15:47:39 crc kubenswrapper[4982]: I0224 15:47:39.172986 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" path="/var/lib/kubelet/pods/484fef48-0d45-4f66-9075-c2d318d85c74/volumes" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.147945 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532468-cqmlt"] Feb 24 15:48:00 crc kubenswrapper[4982]: E0224 15:48:00.148885 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" containerName="extract-content" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.148901 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" containerName="extract-content" Feb 24 15:48:00 crc kubenswrapper[4982]: E0224 15:48:00.148933 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" containerName="registry-server" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.148942 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" containerName="registry-server" Feb 24 15:48:00 crc kubenswrapper[4982]: E0224 15:48:00.148980 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" containerName="extract-utilities" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.148989 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" containerName="extract-utilities" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.149344 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="484fef48-0d45-4f66-9075-c2d318d85c74" 
containerName="registry-server" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.150344 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532468-cqmlt" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.152358 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.152697 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.153825 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.160394 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532468-cqmlt"] Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.307668 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g56nf\" (UniqueName: \"kubernetes.io/projected/3384fe6e-2102-4e04-b4ce-f7d6c4377f9e-kube-api-access-g56nf\") pod \"auto-csr-approver-29532468-cqmlt\" (UID: \"3384fe6e-2102-4e04-b4ce-f7d6c4377f9e\") " pod="openshift-infra/auto-csr-approver-29532468-cqmlt" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.410543 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g56nf\" (UniqueName: \"kubernetes.io/projected/3384fe6e-2102-4e04-b4ce-f7d6c4377f9e-kube-api-access-g56nf\") pod \"auto-csr-approver-29532468-cqmlt\" (UID: \"3384fe6e-2102-4e04-b4ce-f7d6c4377f9e\") " pod="openshift-infra/auto-csr-approver-29532468-cqmlt" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.445737 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g56nf\" (UniqueName: \"kubernetes.io/projected/3384fe6e-2102-4e04-b4ce-f7d6c4377f9e-kube-api-access-g56nf\") pod \"auto-csr-approver-29532468-cqmlt\" (UID: \"3384fe6e-2102-4e04-b4ce-f7d6c4377f9e\") " pod="openshift-infra/auto-csr-approver-29532468-cqmlt" Feb 24 15:48:00 crc kubenswrapper[4982]: I0224 15:48:00.473054 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532468-cqmlt" Feb 24 15:48:01 crc kubenswrapper[4982]: I0224 15:48:01.027737 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532468-cqmlt"] Feb 24 15:48:01 crc kubenswrapper[4982]: I0224 15:48:01.496964 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532468-cqmlt" event={"ID":"3384fe6e-2102-4e04-b4ce-f7d6c4377f9e","Type":"ContainerStarted","Data":"b05dc7dabfd9a0e8e49f7b8a83078235d63ed7793874d236cff703836a97634f"} Feb 24 15:48:04 crc kubenswrapper[4982]: I0224 15:48:04.541625 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532468-cqmlt" event={"ID":"3384fe6e-2102-4e04-b4ce-f7d6c4377f9e","Type":"ContainerStarted","Data":"5f5bcec43239388151a2f293b818c03e97af6f223f6dc686cc6f1d87feb7c115"} Feb 24 15:48:04 crc kubenswrapper[4982]: I0224 15:48:04.582027 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532468-cqmlt" podStartSLOduration=1.567228551 podStartE2EDuration="4.582003944s" podCreationTimestamp="2026-02-24 15:48:00 +0000 UTC" firstStartedPulling="2026-02-24 15:48:01.033516408 +0000 UTC m=+3542.652574921" lastFinishedPulling="2026-02-24 15:48:04.048291821 +0000 UTC m=+3545.667350314" observedRunningTime="2026-02-24 15:48:04.567838561 +0000 UTC m=+3546.186897064" watchObservedRunningTime="2026-02-24 15:48:04.582003944 +0000 UTC m=+3546.201062447" Feb 24 15:48:05 crc kubenswrapper[4982]: I0224 15:48:05.553982 4982 generic.go:334] "Generic (PLEG): container finished" podID="3384fe6e-2102-4e04-b4ce-f7d6c4377f9e" containerID="5f5bcec43239388151a2f293b818c03e97af6f223f6dc686cc6f1d87feb7c115" exitCode=0 Feb 24 15:48:05 crc kubenswrapper[4982]: I0224 15:48:05.554047 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532468-cqmlt" event={"ID":"3384fe6e-2102-4e04-b4ce-f7d6c4377f9e","Type":"ContainerDied","Data":"5f5bcec43239388151a2f293b818c03e97af6f223f6dc686cc6f1d87feb7c115"} Feb 24 15:48:06 crc kubenswrapper[4982]: I0224 15:48:06.981652 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532468-cqmlt" Feb 24 15:48:07 crc kubenswrapper[4982]: I0224 15:48:07.090044 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g56nf\" (UniqueName: \"kubernetes.io/projected/3384fe6e-2102-4e04-b4ce-f7d6c4377f9e-kube-api-access-g56nf\") pod \"3384fe6e-2102-4e04-b4ce-f7d6c4377f9e\" (UID: \"3384fe6e-2102-4e04-b4ce-f7d6c4377f9e\") " Feb 24 15:48:07 crc kubenswrapper[4982]: I0224 15:48:07.097053 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3384fe6e-2102-4e04-b4ce-f7d6c4377f9e-kube-api-access-g56nf" (OuterVolumeSpecName: "kube-api-access-g56nf") pod "3384fe6e-2102-4e04-b4ce-f7d6c4377f9e" (UID: "3384fe6e-2102-4e04-b4ce-f7d6c4377f9e"). InnerVolumeSpecName "kube-api-access-g56nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:48:07 crc kubenswrapper[4982]: I0224 15:48:07.193690 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g56nf\" (UniqueName: \"kubernetes.io/projected/3384fe6e-2102-4e04-b4ce-f7d6c4377f9e-kube-api-access-g56nf\") on node \"crc\" DevicePath \"\"" Feb 24 15:48:07 crc kubenswrapper[4982]: I0224 15:48:07.579383 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532468-cqmlt" event={"ID":"3384fe6e-2102-4e04-b4ce-f7d6c4377f9e","Type":"ContainerDied","Data":"b05dc7dabfd9a0e8e49f7b8a83078235d63ed7793874d236cff703836a97634f"} Feb 24 15:48:07 crc kubenswrapper[4982]: I0224 15:48:07.579424 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b05dc7dabfd9a0e8e49f7b8a83078235d63ed7793874d236cff703836a97634f" Feb 24 15:48:07 crc kubenswrapper[4982]: I0224 15:48:07.579541 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532468-cqmlt" Feb 24 15:48:07 crc kubenswrapper[4982]: I0224 15:48:07.640864 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532462-p998c"] Feb 24 15:48:07 crc kubenswrapper[4982]: I0224 15:48:07.654146 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532462-p998c"] Feb 24 15:48:09 crc kubenswrapper[4982]: I0224 15:48:09.171088 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8765e41d-6b9b-4707-bfc0-0d5a9faf5327" path="/var/lib/kubelet/pods/8765e41d-6b9b-4707-bfc0-0d5a9faf5327/volumes" Feb 24 15:48:49 crc kubenswrapper[4982]: I0224 15:48:49.240976 4982 scope.go:117] "RemoveContainer" containerID="1c37409df9bf36d75456e6402d41525ab72ee5f8c5b5a066755a5c5fedfaaadb" Feb 24 15:49:38 crc kubenswrapper[4982]: I0224 15:49:38.737717 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:49:38 crc kubenswrapper[4982]: I0224 15:49:38.738400 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.145978 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532470-plz24"] Feb 24 15:50:00 crc kubenswrapper[4982]: E0224 15:50:00.147634 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3384fe6e-2102-4e04-b4ce-f7d6c4377f9e" containerName="oc" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.147650 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3384fe6e-2102-4e04-b4ce-f7d6c4377f9e" containerName="oc" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.147924 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3384fe6e-2102-4e04-b4ce-f7d6c4377f9e" containerName="oc" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.148696 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532470-plz24" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.151991 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.152314 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.152701 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.215123 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532470-plz24"] Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.350545 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxs6j\" (UniqueName: \"kubernetes.io/projected/257d2276-89bf-47f7-b7f7-a9bfd88872a2-kube-api-access-gxs6j\") pod \"auto-csr-approver-29532470-plz24\" (UID: \"257d2276-89bf-47f7-b7f7-a9bfd88872a2\") " pod="openshift-infra/auto-csr-approver-29532470-plz24" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.454081 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxs6j\" (UniqueName: \"kubernetes.io/projected/257d2276-89bf-47f7-b7f7-a9bfd88872a2-kube-api-access-gxs6j\") pod \"auto-csr-approver-29532470-plz24\" (UID: \"257d2276-89bf-47f7-b7f7-a9bfd88872a2\") " pod="openshift-infra/auto-csr-approver-29532470-plz24" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.478180 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxs6j\" (UniqueName: \"kubernetes.io/projected/257d2276-89bf-47f7-b7f7-a9bfd88872a2-kube-api-access-gxs6j\") pod \"auto-csr-approver-29532470-plz24\" (UID: \"257d2276-89bf-47f7-b7f7-a9bfd88872a2\") " pod="openshift-infra/auto-csr-approver-29532470-plz24" Feb 24 15:50:00 crc kubenswrapper[4982]: I0224 15:50:00.767221 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532470-plz24" Feb 24 15:50:01 crc kubenswrapper[4982]: I0224 15:50:01.260389 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532470-plz24"] Feb 24 15:50:02 crc kubenswrapper[4982]: I0224 15:50:02.012363 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532470-plz24" event={"ID":"257d2276-89bf-47f7-b7f7-a9bfd88872a2","Type":"ContainerStarted","Data":"a7f8e5f21cfb7142ef2cc6a10f7fa4afad7b22d7bca5f0d4e7ff65587d28b6a4"} Feb 24 15:50:03 crc kubenswrapper[4982]: I0224 15:50:03.039409 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532470-plz24" event={"ID":"257d2276-89bf-47f7-b7f7-a9bfd88872a2","Type":"ContainerStarted","Data":"b684ff1544a5f5fbd8053fe6db4119cffee83cfc38f07ce988b5d783f869214d"} Feb 24 15:50:03 crc kubenswrapper[4982]: I0224 15:50:03.072883 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532470-plz24" podStartSLOduration=1.7390485839999998 podStartE2EDuration="3.072861968s" podCreationTimestamp="2026-02-24 15:50:00 +0000 UTC" firstStartedPulling="2026-02-24 15:50:01.26332603 +0000 UTC m=+3662.882384523" lastFinishedPulling="2026-02-24 15:50:02.597139414 +0000 UTC m=+3664.216197907" observedRunningTime="2026-02-24 15:50:03.057370889 +0000 UTC m=+3664.676429372" watchObservedRunningTime="2026-02-24 15:50:03.072861968 +0000 UTC m=+3664.691920461" Feb 24 15:50:04 crc kubenswrapper[4982]: I0224 15:50:04.052534 4982 generic.go:334] "Generic (PLEG): container finished" podID="257d2276-89bf-47f7-b7f7-a9bfd88872a2" containerID="b684ff1544a5f5fbd8053fe6db4119cffee83cfc38f07ce988b5d783f869214d" exitCode=0 Feb 24 15:50:04 crc kubenswrapper[4982]: I0224 15:50:04.052887 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532470-plz24" event={"ID":"257d2276-89bf-47f7-b7f7-a9bfd88872a2","Type":"ContainerDied","Data":"b684ff1544a5f5fbd8053fe6db4119cffee83cfc38f07ce988b5d783f869214d"} Feb 24 15:50:05 crc kubenswrapper[4982]: I0224 15:50:05.530315 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532470-plz24" Feb 24 15:50:05 crc kubenswrapper[4982]: I0224 15:50:05.607601 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxs6j\" (UniqueName: \"kubernetes.io/projected/257d2276-89bf-47f7-b7f7-a9bfd88872a2-kube-api-access-gxs6j\") pod \"257d2276-89bf-47f7-b7f7-a9bfd88872a2\" (UID: \"257d2276-89bf-47f7-b7f7-a9bfd88872a2\") " Feb 24 15:50:05 crc kubenswrapper[4982]: I0224 15:50:05.616283 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257d2276-89bf-47f7-b7f7-a9bfd88872a2-kube-api-access-gxs6j" (OuterVolumeSpecName: "kube-api-access-gxs6j") pod "257d2276-89bf-47f7-b7f7-a9bfd88872a2" (UID: "257d2276-89bf-47f7-b7f7-a9bfd88872a2"). InnerVolumeSpecName "kube-api-access-gxs6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:50:05 crc kubenswrapper[4982]: I0224 15:50:05.710333 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxs6j\" (UniqueName: \"kubernetes.io/projected/257d2276-89bf-47f7-b7f7-a9bfd88872a2-kube-api-access-gxs6j\") on node \"crc\" DevicePath \"\"" Feb 24 15:50:06 crc kubenswrapper[4982]: I0224 15:50:06.085800 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532470-plz24" event={"ID":"257d2276-89bf-47f7-b7f7-a9bfd88872a2","Type":"ContainerDied","Data":"a7f8e5f21cfb7142ef2cc6a10f7fa4afad7b22d7bca5f0d4e7ff65587d28b6a4"} Feb 24 15:50:06 crc kubenswrapper[4982]: I0224 15:50:06.085852 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7f8e5f21cfb7142ef2cc6a10f7fa4afad7b22d7bca5f0d4e7ff65587d28b6a4" Feb 24 15:50:06 crc kubenswrapper[4982]: I0224 15:50:06.085917 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532470-plz24" Feb 24 15:50:06 crc kubenswrapper[4982]: I0224 15:50:06.150726 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532464-ghqk2"] Feb 24 15:50:06 crc kubenswrapper[4982]: I0224 15:50:06.163465 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532464-ghqk2"] Feb 24 15:50:07 crc kubenswrapper[4982]: I0224 15:50:07.168781 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86be1c41-d00b-4512-8272-77ec5988bf71" path="/var/lib/kubelet/pods/86be1c41-d00b-4512-8272-77ec5988bf71/volumes" Feb 24 15:50:08 crc kubenswrapper[4982]: I0224 15:50:08.738153 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:50:08 crc kubenswrapper[4982]: I0224 15:50:08.738205 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:50:38 crc kubenswrapper[4982]: I0224 15:50:38.738569 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:50:38 crc kubenswrapper[4982]: I0224 15:50:38.738992 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:50:38 crc kubenswrapper[4982]: I0224 15:50:38.739052 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:50:38 crc kubenswrapper[4982]: I0224 15:50:38.740148 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"34ed4eece1be40eaaa7c3e64e060c17a554c39745fde6c1810caa6b910b562a6"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:50:38 crc kubenswrapper[4982]: I0224 15:50:38.740220 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://34ed4eece1be40eaaa7c3e64e060c17a554c39745fde6c1810caa6b910b562a6" gracePeriod=600 Feb 24 15:50:39 crc kubenswrapper[4982]: I0224 15:50:39.521031 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="34ed4eece1be40eaaa7c3e64e060c17a554c39745fde6c1810caa6b910b562a6" exitCode=0 Feb 24 15:50:39 crc kubenswrapper[4982]: I0224 15:50:39.521109 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"34ed4eece1be40eaaa7c3e64e060c17a554c39745fde6c1810caa6b910b562a6"} Feb 24 15:50:39 crc kubenswrapper[4982]: I0224 15:50:39.521548 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62"} Feb 24 15:50:39 crc kubenswrapper[4982]: I0224 15:50:39.521569 4982 scope.go:117] "RemoveContainer" containerID="31cf20ff4a31f48eef545c9445da0e2e94895119134560eb5e36234057b9e0a9" Feb 24 15:50:49 crc kubenswrapper[4982]: I0224 15:50:49.391946 4982 scope.go:117] "RemoveContainer" containerID="5093d2b5d4c71411e6568ec6212396b081edac9b4fb4be3dd6367d01f7e47b66" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.115959 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-98ftj"] Feb 24 15:51:59 crc kubenswrapper[4982]: E0224 15:51:59.116887 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257d2276-89bf-47f7-b7f7-a9bfd88872a2" containerName="oc" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.116900 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="257d2276-89bf-47f7-b7f7-a9bfd88872a2" containerName="oc" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.117160 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="257d2276-89bf-47f7-b7f7-a9bfd88872a2" containerName="oc" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.118732 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.139317 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98ftj"] Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.202583 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-catalog-content\") pod \"certified-operators-98ftj\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.203105 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svxfr\" (UniqueName: \"kubernetes.io/projected/25cb8742-4c0f-4414-944e-c65d4f1895f5-kube-api-access-svxfr\") pod \"certified-operators-98ftj\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.203234 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-utilities\") pod \"certified-operators-98ftj\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.308579 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-catalog-content\") pod \"certified-operators-98ftj\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.308810 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svxfr\" (UniqueName: \"kubernetes.io/projected/25cb8742-4c0f-4414-944e-c65d4f1895f5-kube-api-access-svxfr\") pod \"certified-operators-98ftj\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.308887 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-utilities\") pod \"certified-operators-98ftj\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.309717 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-catalog-content\") pod \"certified-operators-98ftj\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.309765 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-utilities\") pod \"certified-operators-98ftj\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.330418 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-svxfr\" (UniqueName: \"kubernetes.io/projected/25cb8742-4c0f-4414-944e-c65d4f1895f5-kube-api-access-svxfr\") pod \"certified-operators-98ftj\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:51:59 crc kubenswrapper[4982]: I0224 15:51:59.469452 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.157778 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532472-sbj2h"] Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.159868 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532472-sbj2h" Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.163577 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.164329 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.165072 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.183271 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532472-sbj2h"] Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.209293 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98ftj"] Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.244449 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4kh\" (UniqueName: \"kubernetes.io/projected/bb003bfc-c3b5-4200-afb4-6f1a513fedee-kube-api-access-kq4kh\") pod \"auto-csr-approver-29532472-sbj2h\" (UID: \"bb003bfc-c3b5-4200-afb4-6f1a513fedee\") " pod="openshift-infra/auto-csr-approver-29532472-sbj2h" Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.346600 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4kh\" (UniqueName: \"kubernetes.io/projected/bb003bfc-c3b5-4200-afb4-6f1a513fedee-kube-api-access-kq4kh\") pod \"auto-csr-approver-29532472-sbj2h\" (UID: \"bb003bfc-c3b5-4200-afb4-6f1a513fedee\") " pod="openshift-infra/auto-csr-approver-29532472-sbj2h" Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.366310 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4kh\" (UniqueName: \"kubernetes.io/projected/bb003bfc-c3b5-4200-afb4-6f1a513fedee-kube-api-access-kq4kh\") pod \"auto-csr-approver-29532472-sbj2h\" (UID: \"bb003bfc-c3b5-4200-afb4-6f1a513fedee\") " pod="openshift-infra/auto-csr-approver-29532472-sbj2h" Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.553379 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532472-sbj2h" Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.570196 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98ftj" event={"ID":"25cb8742-4c0f-4414-944e-c65d4f1895f5","Type":"ContainerStarted","Data":"ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd"} Feb 24 15:52:00 crc kubenswrapper[4982]: I0224 15:52:00.570238 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98ftj" event={"ID":"25cb8742-4c0f-4414-944e-c65d4f1895f5","Type":"ContainerStarted","Data":"b23639ab9c5951d78a2e6a2021dee79c2371cfb0656d14e9aafa9fd875949069"} Feb 24 15:52:01 crc kubenswrapper[4982]: I0224 15:52:01.090202 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532472-sbj2h"] Feb 24 15:52:01 crc kubenswrapper[4982]: W0224 15:52:01.094544 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb003bfc_c3b5_4200_afb4_6f1a513fedee.slice/crio-26e1c608f32da3f300a23eb1ac064ba2cb9b937c0ba1d1a93729ac2483255bea WatchSource:0}: Error finding container 26e1c608f32da3f300a23eb1ac064ba2cb9b937c0ba1d1a93729ac2483255bea: Status 404 returned error can't find the container with id 26e1c608f32da3f300a23eb1ac064ba2cb9b937c0ba1d1a93729ac2483255bea Feb 24 15:52:01 crc kubenswrapper[4982]: I0224 15:52:01.582999 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532472-sbj2h" event={"ID":"bb003bfc-c3b5-4200-afb4-6f1a513fedee","Type":"ContainerStarted","Data":"26e1c608f32da3f300a23eb1ac064ba2cb9b937c0ba1d1a93729ac2483255bea"} Feb 24 15:52:01 crc kubenswrapper[4982]: I0224 15:52:01.585349 4982 generic.go:334] "Generic (PLEG): container finished" podID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerID="ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd" exitCode=0 Feb 24 15:52:01 crc kubenswrapper[4982]: I0224 15:52:01.585394 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98ftj" event={"ID":"25cb8742-4c0f-4414-944e-c65d4f1895f5","Type":"ContainerDied","Data":"ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd"} Feb 24 15:52:02 crc kubenswrapper[4982]: I0224 15:52:02.600918 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532472-sbj2h" event={"ID":"bb003bfc-c3b5-4200-afb4-6f1a513fedee","Type":"ContainerStarted","Data":"dd93c6de294749bfaf4e5f5f6ab3a5592ff0bb9b41436fab16dbbd17ad84ebaa"} Feb 24 15:52:02 crc kubenswrapper[4982]: I0224 15:52:02.604488 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98ftj" event={"ID":"25cb8742-4c0f-4414-944e-c65d4f1895f5","Type":"ContainerStarted","Data":"693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf"} Feb 24 15:52:02 crc kubenswrapper[4982]: I0224 15:52:02.629087 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532472-sbj2h" podStartSLOduration=1.597719112 podStartE2EDuration="2.629057335s" podCreationTimestamp="2026-02-24 15:52:00 +0000 UTC" firstStartedPulling="2026-02-24 15:52:01.097552526 +0000 UTC m=+3782.716611019" lastFinishedPulling="2026-02-24 15:52:02.128890749 +0000 UTC m=+3783.747949242" observedRunningTime="2026-02-24 15:52:02.615611068 +0000 UTC m=+3784.234669561" 
watchObservedRunningTime="2026-02-24 15:52:02.629057335 +0000 UTC m=+3784.248115838" Feb 24 15:52:03 crc kubenswrapper[4982]: I0224 15:52:03.634919 4982 generic.go:334] "Generic (PLEG): container finished" podID="bb003bfc-c3b5-4200-afb4-6f1a513fedee" containerID="dd93c6de294749bfaf4e5f5f6ab3a5592ff0bb9b41436fab16dbbd17ad84ebaa" exitCode=0 Feb 24 15:52:03 crc kubenswrapper[4982]: I0224 15:52:03.635034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532472-sbj2h" event={"ID":"bb003bfc-c3b5-4200-afb4-6f1a513fedee","Type":"ContainerDied","Data":"dd93c6de294749bfaf4e5f5f6ab3a5592ff0bb9b41436fab16dbbd17ad84ebaa"} Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.067102 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532472-sbj2h" Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.181223 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq4kh\" (UniqueName: \"kubernetes.io/projected/bb003bfc-c3b5-4200-afb4-6f1a513fedee-kube-api-access-kq4kh\") pod \"bb003bfc-c3b5-4200-afb4-6f1a513fedee\" (UID: \"bb003bfc-c3b5-4200-afb4-6f1a513fedee\") " Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.194749 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb003bfc-c3b5-4200-afb4-6f1a513fedee-kube-api-access-kq4kh" (OuterVolumeSpecName: "kube-api-access-kq4kh") pod "bb003bfc-c3b5-4200-afb4-6f1a513fedee" (UID: "bb003bfc-c3b5-4200-afb4-6f1a513fedee"). InnerVolumeSpecName "kube-api-access-kq4kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.288962 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq4kh\" (UniqueName: \"kubernetes.io/projected/bb003bfc-c3b5-4200-afb4-6f1a513fedee-kube-api-access-kq4kh\") on node \"crc\" DevicePath \"\"" Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.665900 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532472-sbj2h" event={"ID":"bb003bfc-c3b5-4200-afb4-6f1a513fedee","Type":"ContainerDied","Data":"26e1c608f32da3f300a23eb1ac064ba2cb9b937c0ba1d1a93729ac2483255bea"} Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.665932 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532472-sbj2h" Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.665942 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e1c608f32da3f300a23eb1ac064ba2cb9b937c0ba1d1a93729ac2483255bea" Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.668613 4982 generic.go:334] "Generic (PLEG): container finished" podID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerID="693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf" exitCode=0 Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.668657 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98ftj" event={"ID":"25cb8742-4c0f-4414-944e-c65d4f1895f5","Type":"ContainerDied","Data":"693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf"} Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.732220 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532466-8lxf9"] Feb 24 15:52:05 crc kubenswrapper[4982]: I0224 15:52:05.745353 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532466-8lxf9"] Feb 24 15:52:06 crc kubenswrapper[4982]: I0224 15:52:06.690569 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98ftj" event={"ID":"25cb8742-4c0f-4414-944e-c65d4f1895f5","Type":"ContainerStarted","Data":"2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243"} Feb 24 15:52:07 crc kubenswrapper[4982]: I0224 15:52:07.161024 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e50a491-45fb-49bc-8a5c-fc2d52edd85f" path="/var/lib/kubelet/pods/3e50a491-45fb-49bc-8a5c-fc2d52edd85f/volumes" Feb 24 15:52:09 crc kubenswrapper[4982]: I0224 15:52:09.470441 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:52:09 crc kubenswrapper[4982]: I0224 15:52:09.471076 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:52:09 crc kubenswrapper[4982]: I0224 15:52:09.624456 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:52:09 crc kubenswrapper[4982]: I0224 15:52:09.650323 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-98ftj" podStartSLOduration=6.1090010790000004 podStartE2EDuration="10.650308096s" podCreationTimestamp="2026-02-24 15:51:59 +0000 UTC" firstStartedPulling="2026-02-24 15:52:01.587844111 +0000 UTC m=+3783.206902604" lastFinishedPulling="2026-02-24 15:52:06.129151128 +0000 UTC m=+3787.748209621" observedRunningTime="2026-02-24 15:52:06.720312973 +0000 UTC m=+3788.339371496" watchObservedRunningTime="2026-02-24 15:52:09.650308096 +0000 UTC m=+3791.269366579" Feb 24 15:52:19 crc kubenswrapper[4982]: I0224 15:52:19.531928 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:52:19 crc kubenswrapper[4982]: I0224 15:52:19.581925 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-98ftj"] Feb 24 15:52:19 crc kubenswrapper[4982]: I0224 15:52:19.836014 4982 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-98ftj" podUID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerName="registry-server" containerID="cri-o://2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243" gracePeriod=2 Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.381836 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.493416 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-utilities\") pod \"25cb8742-4c0f-4414-944e-c65d4f1895f5\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.493612 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svxfr\" (UniqueName: \"kubernetes.io/projected/25cb8742-4c0f-4414-944e-c65d4f1895f5-kube-api-access-svxfr\") pod \"25cb8742-4c0f-4414-944e-c65d4f1895f5\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.493679 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-catalog-content\") pod \"25cb8742-4c0f-4414-944e-c65d4f1895f5\" (UID: \"25cb8742-4c0f-4414-944e-c65d4f1895f5\") " Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.494714 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-utilities" (OuterVolumeSpecName: "utilities") pod "25cb8742-4c0f-4414-944e-c65d4f1895f5" (UID: "25cb8742-4c0f-4414-944e-c65d4f1895f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.500580 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25cb8742-4c0f-4414-944e-c65d4f1895f5-kube-api-access-svxfr" (OuterVolumeSpecName: "kube-api-access-svxfr") pod "25cb8742-4c0f-4414-944e-c65d4f1895f5" (UID: "25cb8742-4c0f-4414-944e-c65d4f1895f5"). InnerVolumeSpecName "kube-api-access-svxfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.553425 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25cb8742-4c0f-4414-944e-c65d4f1895f5" (UID: "25cb8742-4c0f-4414-944e-c65d4f1895f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.597619 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.598033 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svxfr\" (UniqueName: \"kubernetes.io/projected/25cb8742-4c0f-4414-944e-c65d4f1895f5-kube-api-access-svxfr\") on node \"crc\" DevicePath \"\"" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.598054 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cb8742-4c0f-4414-944e-c65d4f1895f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.851912 4982 generic.go:334] "Generic (PLEG): container finished" podID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerID="2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243" exitCode=0 Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.851968 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98ftj" event={"ID":"25cb8742-4c0f-4414-944e-c65d4f1895f5","Type":"ContainerDied","Data":"2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243"} Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.851983 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98ftj" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.852011 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98ftj" event={"ID":"25cb8742-4c0f-4414-944e-c65d4f1895f5","Type":"ContainerDied","Data":"b23639ab9c5951d78a2e6a2021dee79c2371cfb0656d14e9aafa9fd875949069"} Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.852047 4982 scope.go:117] "RemoveContainer" containerID="2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.877833 4982 scope.go:117] "RemoveContainer" containerID="693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf" Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.900927 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-98ftj"] Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.912261 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-98ftj"] Feb 24 15:52:20 crc kubenswrapper[4982]: I0224 15:52:20.919699 4982 scope.go:117] "RemoveContainer" containerID="ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd" Feb 24 15:52:21 crc kubenswrapper[4982]: I0224 15:52:20.999982 4982 scope.go:117] "RemoveContainer" containerID="2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243" Feb 24 15:52:21 crc kubenswrapper[4982]: E0224 15:52:21.000426 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243\": container with ID starting with 2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243 not found: ID does not exist" containerID="2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243" Feb 24 15:52:21 crc kubenswrapper[4982]: I0224 15:52:21.000453 
4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243"} err="failed to get container status \"2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243\": rpc error: code = NotFound desc = could not find container \"2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243\": container with ID starting with 2f5ed6a6087bdd0d34aa677d4c5b9b40a42db85a6c37be80e35543489475b243 not found: ID does not exist" Feb 24 15:52:21 crc kubenswrapper[4982]: I0224 15:52:21.000473 4982 scope.go:117] "RemoveContainer" containerID="693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf" Feb 24 15:52:21 crc kubenswrapper[4982]: E0224 15:52:21.001025 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf\": container with ID starting with 693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf not found: ID does not exist" containerID="693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf" Feb 24 15:52:21 crc kubenswrapper[4982]: I0224 15:52:21.001056 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf"} err="failed to get container status \"693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf\": rpc error: code = NotFound desc = could not find container \"693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf\": container with ID starting with 693274c02d0575efba175ae284de93fd00a58a5a661ca0a0a1d41a474c23b8bf not found: ID does not exist" Feb 24 15:52:21 crc kubenswrapper[4982]: I0224 15:52:21.001073 4982 scope.go:117] "RemoveContainer" containerID="ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd" Feb 24 15:52:21 crc kubenswrapper[4982]: E0224 15:52:21.001412 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd\": container with ID starting with ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd not found: ID does not exist" containerID="ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd" Feb 24 15:52:21 crc kubenswrapper[4982]: I0224 15:52:21.001472 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd"} err="failed to get container status \"ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd\": rpc error: code = NotFound desc = could not find container \"ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd\": container with ID starting with ce6b911dac4b934ade49ecc82265f38bb1d6f9141d935af64430491ab12db7bd not found: ID does not exist" Feb 24 15:52:21 crc kubenswrapper[4982]: I0224 15:52:21.159049 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25cb8742-4c0f-4414-944e-c65d4f1895f5" path="/var/lib/kubelet/pods/25cb8742-4c0f-4414-944e-c65d4f1895f5/volumes" Feb 24 15:52:49 crc kubenswrapper[4982]: I0224 15:52:49.543178 4982 scope.go:117] "RemoveContainer" containerID="ec076ec2a4737dd0e7f83da43ff592162241cb63b425f3f4bef8f039fde6dde1" Feb 24 15:53:08 crc kubenswrapper[4982]: I0224 15:53:08.749815 4982 patch_prober.go:28] interesting 
pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:53:08 crc kubenswrapper[4982]: I0224 15:53:08.750245 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:53:38 crc kubenswrapper[4982]: I0224 15:53:38.737870 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:53:38 crc kubenswrapper[4982]: I0224 15:53:38.738315 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.146537 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532474-z688f"] Feb 24 15:54:00 crc kubenswrapper[4982]: E0224 15:54:00.147672 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerName="extract-utilities" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.147689 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerName="extract-utilities" Feb 24 15:54:00 crc kubenswrapper[4982]: E0224 15:54:00.147712 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerName="extract-content" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.147719 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerName="extract-content" Feb 24 15:54:00 crc kubenswrapper[4982]: E0224 15:54:00.147728 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerName="registry-server" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.147734 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerName="registry-server" Feb 24 15:54:00 crc kubenswrapper[4982]: E0224 15:54:00.147773 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb003bfc-c3b5-4200-afb4-6f1a513fedee" containerName="oc" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.147778 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb003bfc-c3b5-4200-afb4-6f1a513fedee" containerName="oc" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.148011 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="25cb8742-4c0f-4414-944e-c65d4f1895f5" containerName="registry-server" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.148045 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb003bfc-c3b5-4200-afb4-6f1a513fedee" containerName="oc" Feb 24 15:54:00 crc 
kubenswrapper[4982]: I0224 15:54:00.148855 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532474-z688f" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.151447 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.151661 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.152774 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.157790 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532474-z688f"] Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.225907 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthpd\" (UniqueName: \"kubernetes.io/projected/87cf171e-d79a-4311-a224-14e17384565c-kube-api-access-qthpd\") pod \"auto-csr-approver-29532474-z688f\" (UID: \"87cf171e-d79a-4311-a224-14e17384565c\") " pod="openshift-infra/auto-csr-approver-29532474-z688f" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.328465 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qthpd\" (UniqueName: \"kubernetes.io/projected/87cf171e-d79a-4311-a224-14e17384565c-kube-api-access-qthpd\") pod \"auto-csr-approver-29532474-z688f\" (UID: \"87cf171e-d79a-4311-a224-14e17384565c\") " pod="openshift-infra/auto-csr-approver-29532474-z688f" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.349320 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qthpd\" (UniqueName: \"kubernetes.io/projected/87cf171e-d79a-4311-a224-14e17384565c-kube-api-access-qthpd\") pod \"auto-csr-approver-29532474-z688f\" (UID: \"87cf171e-d79a-4311-a224-14e17384565c\") " pod="openshift-infra/auto-csr-approver-29532474-z688f" Feb 24 15:54:00 crc kubenswrapper[4982]: I0224 15:54:00.467449 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532474-z688f" Feb 24 15:54:01 crc kubenswrapper[4982]: I0224 15:54:01.167147 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532474-z688f"] Feb 24 15:54:01 crc kubenswrapper[4982]: I0224 15:54:01.181987 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 15:54:02 crc kubenswrapper[4982]: I0224 15:54:02.101222 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532474-z688f" event={"ID":"87cf171e-d79a-4311-a224-14e17384565c","Type":"ContainerStarted","Data":"686d6a554d562daebcaa7d4d76bc956e12aca5beaa290e3f72520750e0c3b0fa"} Feb 24 15:54:03 crc kubenswrapper[4982]: I0224 15:54:03.130961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532474-z688f" event={"ID":"87cf171e-d79a-4311-a224-14e17384565c","Type":"ContainerStarted","Data":"f74ae4766ded6c82913e3b0ec76971925b598ebc17ef701f9ff199996ff997ba"} Feb 24 15:54:03 crc kubenswrapper[4982]: I0224 15:54:03.151292 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532474-z688f" podStartSLOduration=1.732841171 podStartE2EDuration="3.151269249s" podCreationTimestamp="2026-02-24 15:54:00 +0000 UTC" firstStartedPulling="2026-02-24 15:54:01.181618015 +0000 UTC m=+3902.800676528" lastFinishedPulling="2026-02-24 15:54:02.600046113 +0000 UTC m=+3904.219104606" observedRunningTime="2026-02-24 15:54:03.145911693 +0000 UTC m=+3904.764970186" watchObservedRunningTime="2026-02-24 15:54:03.151269249 +0000 UTC m=+3904.770327742" Feb 24 15:54:04 crc kubenswrapper[4982]: I0224 15:54:04.144965 4982 generic.go:334] "Generic (PLEG): container finished" podID="87cf171e-d79a-4311-a224-14e17384565c" containerID="f74ae4766ded6c82913e3b0ec76971925b598ebc17ef701f9ff199996ff997ba" exitCode=0 Feb 24 15:54:04 crc kubenswrapper[4982]: I0224 15:54:04.145009 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532474-z688f" event={"ID":"87cf171e-d79a-4311-a224-14e17384565c","Type":"ContainerDied","Data":"f74ae4766ded6c82913e3b0ec76971925b598ebc17ef701f9ff199996ff997ba"} Feb 24 15:54:05 crc kubenswrapper[4982]: I0224 15:54:05.572476 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532474-z688f" Feb 24 15:54:05 crc kubenswrapper[4982]: I0224 15:54:05.655248 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qthpd\" (UniqueName: \"kubernetes.io/projected/87cf171e-d79a-4311-a224-14e17384565c-kube-api-access-qthpd\") pod \"87cf171e-d79a-4311-a224-14e17384565c\" (UID: \"87cf171e-d79a-4311-a224-14e17384565c\") " Feb 24 15:54:05 crc kubenswrapper[4982]: I0224 15:54:05.672074 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf171e-d79a-4311-a224-14e17384565c-kube-api-access-qthpd" (OuterVolumeSpecName: "kube-api-access-qthpd") pod "87cf171e-d79a-4311-a224-14e17384565c" (UID: "87cf171e-d79a-4311-a224-14e17384565c"). InnerVolumeSpecName "kube-api-access-qthpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:54:05 crc kubenswrapper[4982]: I0224 15:54:05.758357 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qthpd\" (UniqueName: \"kubernetes.io/projected/87cf171e-d79a-4311-a224-14e17384565c-kube-api-access-qthpd\") on node \"crc\" DevicePath \"\"" Feb 24 15:54:06 crc kubenswrapper[4982]: I0224 15:54:06.174451 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532474-z688f" event={"ID":"87cf171e-d79a-4311-a224-14e17384565c","Type":"ContainerDied","Data":"686d6a554d562daebcaa7d4d76bc956e12aca5beaa290e3f72520750e0c3b0fa"} Feb 24 15:54:06 crc kubenswrapper[4982]: I0224 15:54:06.174931 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="686d6a554d562daebcaa7d4d76bc956e12aca5beaa290e3f72520750e0c3b0fa" Feb 24 15:54:06 crc kubenswrapper[4982]: I0224 15:54:06.174531 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532474-z688f" Feb 24 15:54:06 crc kubenswrapper[4982]: I0224 15:54:06.228215 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532468-cqmlt"] Feb 24 15:54:06 crc kubenswrapper[4982]: I0224 15:54:06.237836 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532468-cqmlt"] Feb 24 15:54:07 crc kubenswrapper[4982]: I0224 15:54:07.161896 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3384fe6e-2102-4e04-b4ce-f7d6c4377f9e" path="/var/lib/kubelet/pods/3384fe6e-2102-4e04-b4ce-f7d6c4377f9e/volumes" Feb 24 15:54:08 crc kubenswrapper[4982]: I0224 15:54:08.737964 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 15:54:08 crc kubenswrapper[4982]: I0224 15:54:08.738363 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 15:54:08 crc kubenswrapper[4982]: I0224 15:54:08.738438 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 15:54:08 crc kubenswrapper[4982]: I0224 15:54:08.739524 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 15:54:08 crc kubenswrapper[4982]: I0224 15:54:08.739607 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" gracePeriod=600 Feb 24 15:54:08 crc kubenswrapper[4982]: E0224 15:54:08.862713 4982 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:54:09 crc kubenswrapper[4982]: I0224 15:54:09.212490 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" exitCode=0 Feb 24 15:54:09 crc kubenswrapper[4982]: I0224 15:54:09.212546 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62"} Feb 24 15:54:09 crc kubenswrapper[4982]: I0224 15:54:09.212578 4982 scope.go:117] "RemoveContainer" containerID="34ed4eece1be40eaaa7c3e64e060c17a554c39745fde6c1810caa6b910b562a6" Feb 24 15:54:09 crc kubenswrapper[4982]: I0224 15:54:09.213927 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:54:09 crc kubenswrapper[4982]: E0224 15:54:09.214863 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:54:23 crc kubenswrapper[4982]: I0224 15:54:23.146323 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:54:23 crc kubenswrapper[4982]: E0224 15:54:23.147923 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:54:38 crc kubenswrapper[4982]: I0224 15:54:38.146631 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:54:38 crc kubenswrapper[4982]: E0224 15:54:38.147403 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:54:49 crc kubenswrapper[4982]: I0224 15:54:49.697147 4982 scope.go:117] "RemoveContainer" containerID="5f5bcec43239388151a2f293b818c03e97af6f223f6dc686cc6f1d87feb7c115" Feb 24 15:54:50 crc kubenswrapper[4982]: I0224 15:54:50.146535 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 
15:54:50 crc kubenswrapper[4982]: E0224 15:54:50.147102 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:55:02 crc kubenswrapper[4982]: I0224 15:55:02.146379 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:55:02 crc kubenswrapper[4982]: E0224 15:55:02.147494 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:55:14 crc kubenswrapper[4982]: I0224 15:55:14.146387 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:55:14 crc kubenswrapper[4982]: E0224 15:55:14.147782 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:55:25 crc kubenswrapper[4982]: I0224 15:55:25.146094 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:55:25 crc kubenswrapper[4982]: E0224 15:55:25.148046 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:55:36 crc kubenswrapper[4982]: I0224 15:55:36.146104 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:55:36 crc kubenswrapper[4982]: E0224 15:55:36.146796 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:55:47 crc kubenswrapper[4982]: I0224 15:55:47.146457 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:55:47 crc kubenswrapper[4982]: E0224 15:55:47.148546 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:55:56 crc kubenswrapper[4982]: I0224 15:55:56.955729 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rnfz2"] Feb 24 15:55:56 crc kubenswrapper[4982]: E0224 15:55:56.957189 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87cf171e-d79a-4311-a224-14e17384565c" containerName="oc" Feb 24 15:55:56 crc kubenswrapper[4982]: I0224 15:55:56.957209 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="87cf171e-d79a-4311-a224-14e17384565c" containerName="oc" Feb 24 15:55:56 crc kubenswrapper[4982]: I0224 15:55:56.962241 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="87cf171e-d79a-4311-a224-14e17384565c" containerName="oc" Feb 24 15:55:56 crc kubenswrapper[4982]: I0224 15:55:56.970186 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.002408 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rnfz2"] Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.037758 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjz8s\" (UniqueName: \"kubernetes.io/projected/2ac0cb26-a32d-4377-afe7-33e056fd5f4d-kube-api-access-sjz8s\") pod \"redhat-operators-rnfz2\" (UID: \"2ac0cb26-a32d-4377-afe7-33e056fd5f4d\") " pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.038047 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac0cb26-a32d-4377-afe7-33e056fd5f4d-utilities\") pod \"redhat-operators-rnfz2\" (UID: \"2ac0cb26-a32d-4377-afe7-33e056fd5f4d\") " pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.038572 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac0cb26-a32d-4377-afe7-33e056fd5f4d-catalog-content\") pod \"redhat-operators-rnfz2\" (UID: \"2ac0cb26-a32d-4377-afe7-33e056fd5f4d\") " pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.139867 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac0cb26-a32d-4377-afe7-33e056fd5f4d-utilities\") pod \"redhat-operators-rnfz2\" (UID: \"2ac0cb26-a32d-4377-afe7-33e056fd5f4d\") " pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.140091 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac0cb26-a32d-4377-afe7-33e056fd5f4d-catalog-content\") pod \"redhat-operators-rnfz2\" (UID: \"2ac0cb26-a32d-4377-afe7-33e056fd5f4d\") " pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.140149 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sjz8s\" (UniqueName: \"kubernetes.io/projected/2ac0cb26-a32d-4377-afe7-33e056fd5f4d-kube-api-access-sjz8s\") pod \"redhat-operators-rnfz2\" (UID: \"2ac0cb26-a32d-4377-afe7-33e056fd5f4d\") " pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.140655 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac0cb26-a32d-4377-afe7-33e056fd5f4d-utilities\") pod \"redhat-operators-rnfz2\" (UID: \"2ac0cb26-a32d-4377-afe7-33e056fd5f4d\") " pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.140730 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac0cb26-a32d-4377-afe7-33e056fd5f4d-catalog-content\") pod \"redhat-operators-rnfz2\" (UID: \"2ac0cb26-a32d-4377-afe7-33e056fd5f4d\") " pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.164993 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjz8s\" (UniqueName: \"kubernetes.io/projected/2ac0cb26-a32d-4377-afe7-33e056fd5f4d-kube-api-access-sjz8s\") pod \"redhat-operators-rnfz2\" (UID: \"2ac0cb26-a32d-4377-afe7-33e056fd5f4d\") " pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.323912 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:55:57 crc kubenswrapper[4982]: I0224 15:55:57.825813 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rnfz2"] Feb 24 15:55:58 crc kubenswrapper[4982]: I0224 15:55:58.145869 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:55:58 crc kubenswrapper[4982]: E0224 15:55:58.146487 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:55:58 crc kubenswrapper[4982]: I0224 15:55:58.494449 4982 generic.go:334] "Generic (PLEG): container finished" podID="2ac0cb26-a32d-4377-afe7-33e056fd5f4d" containerID="3e6a96128159d4f4146e38495ea4db05b3639fc1aa056da3dc4b141b0f478703" exitCode=0 Feb 24 15:55:58 crc kubenswrapper[4982]: I0224 15:55:58.494491 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnfz2" event={"ID":"2ac0cb26-a32d-4377-afe7-33e056fd5f4d","Type":"ContainerDied","Data":"3e6a96128159d4f4146e38495ea4db05b3639fc1aa056da3dc4b141b0f478703"} Feb 24 15:55:58 crc kubenswrapper[4982]: I0224 15:55:58.494536 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnfz2" event={"ID":"2ac0cb26-a32d-4377-afe7-33e056fd5f4d","Type":"ContainerStarted","Data":"9e3f7ccd6caba43225c89c0269554ee8c182acc11b4bdf1a0a2f2896a0d16851"} Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.185722 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s92g9"] Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.190547 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.208145 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s92g9"]
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.294930 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-utilities\") pod \"redhat-marketplace-s92g9\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.295002 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-catalog-content\") pod \"redhat-marketplace-s92g9\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.295102 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk92p\" (UniqueName: \"kubernetes.io/projected/336678f2-e2a8-4f82-a275-bbbc3b3c574e-kube-api-access-nk92p\") pod \"redhat-marketplace-s92g9\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.396750 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-utilities\") pod \"redhat-marketplace-s92g9\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.396835 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-catalog-content\") pod \"redhat-marketplace-s92g9\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.396942 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk92p\" (UniqueName: \"kubernetes.io/projected/336678f2-e2a8-4f82-a275-bbbc3b3c574e-kube-api-access-nk92p\") pod \"redhat-marketplace-s92g9\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.397709 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-utilities\") pod \"redhat-marketplace-s92g9\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.397802 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-catalog-content\") pod \"redhat-marketplace-s92g9\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.416022 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk92p\" (UniqueName: \"kubernetes.io/projected/336678f2-e2a8-4f82-a275-bbbc3b3c574e-kube-api-access-nk92p\") pod \"redhat-marketplace-s92g9\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:55:59 crc kubenswrapper[4982]: I0224 15:55:59.532005 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s92g9"
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.159250 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532476-hhpfz"]
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.161420 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532476-hhpfz"
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.163374 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.163861 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.164967 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.174734 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532476-hhpfz"]
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.215195 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llhvl\" (UniqueName: \"kubernetes.io/projected/715906a3-3aa8-4ea0-82b2-f4d8fe085a04-kube-api-access-llhvl\") pod \"auto-csr-approver-29532476-hhpfz\" (UID: \"715906a3-3aa8-4ea0-82b2-f4d8fe085a04\") " pod="openshift-infra/auto-csr-approver-29532476-hhpfz"
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.220236 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s92g9"]
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.318077 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llhvl\" (UniqueName: \"kubernetes.io/projected/715906a3-3aa8-4ea0-82b2-f4d8fe085a04-kube-api-access-llhvl\") pod \"auto-csr-approver-29532476-hhpfz\" (UID: \"715906a3-3aa8-4ea0-82b2-f4d8fe085a04\") " pod="openshift-infra/auto-csr-approver-29532476-hhpfz"
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.339192 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llhvl\" (UniqueName: \"kubernetes.io/projected/715906a3-3aa8-4ea0-82b2-f4d8fe085a04-kube-api-access-llhvl\") pod \"auto-csr-approver-29532476-hhpfz\" (UID: \"715906a3-3aa8-4ea0-82b2-f4d8fe085a04\") " pod="openshift-infra/auto-csr-approver-29532476-hhpfz"
Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.485625 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532476-hhpfz"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532476-hhpfz" Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.519915 4982 generic.go:334] "Generic (PLEG): container finished" podID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerID="31d49f0efdc0b0238af9bb4d281dc5901d0f51990e633725669ed74f46f0a33c" exitCode=0 Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.519975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s92g9" event={"ID":"336678f2-e2a8-4f82-a275-bbbc3b3c574e","Type":"ContainerDied","Data":"31d49f0efdc0b0238af9bb4d281dc5901d0f51990e633725669ed74f46f0a33c"} Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.520024 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s92g9" event={"ID":"336678f2-e2a8-4f82-a275-bbbc3b3c574e","Type":"ContainerStarted","Data":"fc8f6da41bdd068ede3b08feecfa32447a2bc222981c8a8caa7ed72535a532e1"} Feb 24 15:56:00 crc kubenswrapper[4982]: I0224 15:56:00.965128 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532476-hhpfz"] Feb 24 15:56:01 crc kubenswrapper[4982]: I0224 15:56:01.535115 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532476-hhpfz" event={"ID":"715906a3-3aa8-4ea0-82b2-f4d8fe085a04","Type":"ContainerStarted","Data":"38164eb3a2349e862f83010c0c5ce26d8c9ed9bcd04744ebe9c2388a50c566ab"} Feb 24 15:56:01 crc kubenswrapper[4982]: I0224 15:56:01.538732 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s92g9" event={"ID":"336678f2-e2a8-4f82-a275-bbbc3b3c574e","Type":"ContainerStarted","Data":"58cd6b1d328e665d34604c062e700d76c40d7c3d913496dac7df0a6b2e0c63c9"} Feb 24 15:56:02 crc kubenswrapper[4982]: I0224 15:56:02.552624 4982 generic.go:334] "Generic (PLEG): container finished" podID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerID="58cd6b1d328e665d34604c062e700d76c40d7c3d913496dac7df0a6b2e0c63c9" exitCode=0 Feb 24 15:56:02 crc kubenswrapper[4982]: I0224 15:56:02.552711 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s92g9" event={"ID":"336678f2-e2a8-4f82-a275-bbbc3b3c574e","Type":"ContainerDied","Data":"58cd6b1d328e665d34604c062e700d76c40d7c3d913496dac7df0a6b2e0c63c9"} Feb 24 15:56:02 crc kubenswrapper[4982]: I0224 15:56:02.557446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532476-hhpfz" event={"ID":"715906a3-3aa8-4ea0-82b2-f4d8fe085a04","Type":"ContainerStarted","Data":"add0f34626b3df4c7d0163b444889982117d99a9b328e5d6b76d7f6b72ca4927"} Feb 24 15:56:02 crc kubenswrapper[4982]: I0224 15:56:02.603308 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532476-hhpfz" podStartSLOduration=1.549190237 podStartE2EDuration="2.603287098s" podCreationTimestamp="2026-02-24 15:56:00 +0000 UTC" firstStartedPulling="2026-02-24 15:56:00.974927136 +0000 UTC m=+4022.593985629" lastFinishedPulling="2026-02-24 15:56:02.029023997 +0000 UTC m=+4023.648082490" observedRunningTime="2026-02-24 15:56:02.593864461 +0000 UTC m=+4024.212922954" watchObservedRunningTime="2026-02-24 15:56:02.603287098 +0000 UTC m=+4024.222345591" Feb 24 15:56:03 crc kubenswrapper[4982]: I0224 15:56:03.572260 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s92g9" 
event={"ID":"336678f2-e2a8-4f82-a275-bbbc3b3c574e","Type":"ContainerStarted","Data":"d55a1fca4b3dd2dbe7785e335ca2bfce3fd2115f620503fa9e260dbb2aac28c7"} Feb 24 15:56:03 crc kubenswrapper[4982]: I0224 15:56:03.575030 4982 generic.go:334] "Generic (PLEG): container finished" podID="715906a3-3aa8-4ea0-82b2-f4d8fe085a04" containerID="add0f34626b3df4c7d0163b444889982117d99a9b328e5d6b76d7f6b72ca4927" exitCode=0 Feb 24 15:56:03 crc kubenswrapper[4982]: I0224 15:56:03.575061 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532476-hhpfz" event={"ID":"715906a3-3aa8-4ea0-82b2-f4d8fe085a04","Type":"ContainerDied","Data":"add0f34626b3df4c7d0163b444889982117d99a9b328e5d6b76d7f6b72ca4927"} Feb 24 15:56:03 crc kubenswrapper[4982]: I0224 15:56:03.596229 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s92g9" podStartSLOduration=2.155348701 podStartE2EDuration="4.596211323s" podCreationTimestamp="2026-02-24 15:55:59 +0000 UTC" firstStartedPulling="2026-02-24 15:56:00.531712015 +0000 UTC m=+4022.150770508" lastFinishedPulling="2026-02-24 15:56:02.972574637 +0000 UTC m=+4024.591633130" observedRunningTime="2026-02-24 15:56:03.591573547 +0000 UTC m=+4025.210632050" watchObservedRunningTime="2026-02-24 15:56:03.596211323 +0000 UTC m=+4025.215269806" Feb 24 15:56:09 crc kubenswrapper[4982]: I0224 15:56:09.156946 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:56:09 crc kubenswrapper[4982]: E0224 15:56:09.158848 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:56:09 crc kubenswrapper[4982]: I0224 15:56:09.532190 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s92g9" Feb 24 15:56:09 crc kubenswrapper[4982]: I0224 15:56:09.532577 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s92g9" Feb 24 15:56:09 crc kubenswrapper[4982]: I0224 15:56:09.596447 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s92g9" Feb 24 15:56:09 crc kubenswrapper[4982]: I0224 15:56:09.717800 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s92g9" Feb 24 15:56:09 crc kubenswrapper[4982]: I0224 15:56:09.847402 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s92g9"] Feb 24 15:56:09 crc kubenswrapper[4982]: I0224 15:56:09.981869 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532476-hhpfz" Feb 24 15:56:10 crc kubenswrapper[4982]: I0224 15:56:10.003554 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llhvl\" (UniqueName: \"kubernetes.io/projected/715906a3-3aa8-4ea0-82b2-f4d8fe085a04-kube-api-access-llhvl\") pod \"715906a3-3aa8-4ea0-82b2-f4d8fe085a04\" (UID: \"715906a3-3aa8-4ea0-82b2-f4d8fe085a04\") " Feb 24 15:56:10 crc kubenswrapper[4982]: I0224 15:56:10.010670 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715906a3-3aa8-4ea0-82b2-f4d8fe085a04-kube-api-access-llhvl" (OuterVolumeSpecName: "kube-api-access-llhvl") pod "715906a3-3aa8-4ea0-82b2-f4d8fe085a04" (UID: "715906a3-3aa8-4ea0-82b2-f4d8fe085a04"). InnerVolumeSpecName "kube-api-access-llhvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:56:10 crc kubenswrapper[4982]: I0224 15:56:10.107424 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llhvl\" (UniqueName: \"kubernetes.io/projected/715906a3-3aa8-4ea0-82b2-f4d8fe085a04-kube-api-access-llhvl\") on node \"crc\" DevicePath \"\"" Feb 24 15:56:10 crc kubenswrapper[4982]: I0224 15:56:10.673791 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnfz2" event={"ID":"2ac0cb26-a32d-4377-afe7-33e056fd5f4d","Type":"ContainerStarted","Data":"2b4254a8810f86ab9d79ee5a75560e935d30d96578e45dd4afdef9335b7b493d"} Feb 24 15:56:10 crc kubenswrapper[4982]: I0224 15:56:10.675554 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532476-hhpfz" event={"ID":"715906a3-3aa8-4ea0-82b2-f4d8fe085a04","Type":"ContainerDied","Data":"38164eb3a2349e862f83010c0c5ce26d8c9ed9bcd04744ebe9c2388a50c566ab"} Feb 24 15:56:10 crc kubenswrapper[4982]: I0224 15:56:10.675591 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532476-hhpfz" Feb 24 15:56:10 crc kubenswrapper[4982]: I0224 15:56:10.675618 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38164eb3a2349e862f83010c0c5ce26d8c9ed9bcd04744ebe9c2388a50c566ab" Feb 24 15:56:11 crc kubenswrapper[4982]: I0224 15:56:11.069536 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532470-plz24"] Feb 24 15:56:11 crc kubenswrapper[4982]: I0224 15:56:11.080032 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532470-plz24"] Feb 24 15:56:11 crc kubenswrapper[4982]: I0224 15:56:11.158474 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257d2276-89bf-47f7-b7f7-a9bfd88872a2" path="/var/lib/kubelet/pods/257d2276-89bf-47f7-b7f7-a9bfd88872a2/volumes" Feb 24 15:56:11 crc kubenswrapper[4982]: E0224 15:56:11.363159 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ac0cb26_a32d_4377_afe7_33e056fd5f4d.slice/crio-2b4254a8810f86ab9d79ee5a75560e935d30d96578e45dd4afdef9335b7b493d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ac0cb26_a32d_4377_afe7_33e056fd5f4d.slice/crio-conmon-2b4254a8810f86ab9d79ee5a75560e935d30d96578e45dd4afdef9335b7b493d.scope\": RecentStats: unable to find data in memory cache]" Feb 24 15:56:11 crc kubenswrapper[4982]: I0224 15:56:11.688085 4982 generic.go:334] "Generic (PLEG): container finished" podID="2ac0cb26-a32d-4377-afe7-33e056fd5f4d" containerID="2b4254a8810f86ab9d79ee5a75560e935d30d96578e45dd4afdef9335b7b493d" exitCode=0 Feb 24 15:56:11 crc kubenswrapper[4982]: I0224 15:56:11.688284 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s92g9" podUID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerName="registry-server" containerID="cri-o://d55a1fca4b3dd2dbe7785e335ca2bfce3fd2115f620503fa9e260dbb2aac28c7" gracePeriod=2 Feb 24 15:56:11 crc kubenswrapper[4982]: I0224 15:56:11.688867 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnfz2" event={"ID":"2ac0cb26-a32d-4377-afe7-33e056fd5f4d","Type":"ContainerDied","Data":"2b4254a8810f86ab9d79ee5a75560e935d30d96578e45dd4afdef9335b7b493d"} Feb 24 15:56:13 crc kubenswrapper[4982]: I0224 15:56:13.721315 4982 generic.go:334] "Generic (PLEG): container finished" podID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerID="d55a1fca4b3dd2dbe7785e335ca2bfce3fd2115f620503fa9e260dbb2aac28c7" exitCode=0 Feb 24 15:56:13 crc kubenswrapper[4982]: I0224 15:56:13.721582 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s92g9" event={"ID":"336678f2-e2a8-4f82-a275-bbbc3b3c574e","Type":"ContainerDied","Data":"d55a1fca4b3dd2dbe7785e335ca2bfce3fd2115f620503fa9e260dbb2aac28c7"} Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.260140 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s92g9" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.318075 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk92p\" (UniqueName: \"kubernetes.io/projected/336678f2-e2a8-4f82-a275-bbbc3b3c574e-kube-api-access-nk92p\") pod \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.318167 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-catalog-content\") pod \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.318423 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-utilities\") pod \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\" (UID: \"336678f2-e2a8-4f82-a275-bbbc3b3c574e\") " Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.319391 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-utilities" (OuterVolumeSpecName: "utilities") pod "336678f2-e2a8-4f82-a275-bbbc3b3c574e" (UID: "336678f2-e2a8-4f82-a275-bbbc3b3c574e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.328962 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336678f2-e2a8-4f82-a275-bbbc3b3c574e-kube-api-access-nk92p" (OuterVolumeSpecName: "kube-api-access-nk92p") pod "336678f2-e2a8-4f82-a275-bbbc3b3c574e" (UID: "336678f2-e2a8-4f82-a275-bbbc3b3c574e"). InnerVolumeSpecName "kube-api-access-nk92p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.334336 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "336678f2-e2a8-4f82-a275-bbbc3b3c574e" (UID: "336678f2-e2a8-4f82-a275-bbbc3b3c574e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.420651 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.420719 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk92p\" (UniqueName: \"kubernetes.io/projected/336678f2-e2a8-4f82-a275-bbbc3b3c574e-kube-api-access-nk92p\") on node \"crc\" DevicePath \"\"" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.420736 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/336678f2-e2a8-4f82-a275-bbbc3b3c574e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.735440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rnfz2" event={"ID":"2ac0cb26-a32d-4377-afe7-33e056fd5f4d","Type":"ContainerStarted","Data":"f7fc1af483fee5b4868fed6e960d50db42cc82cbc15b14b034eafa4e692eb11b"} Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.737122 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s92g9" event={"ID":"336678f2-e2a8-4f82-a275-bbbc3b3c574e","Type":"ContainerDied","Data":"fc8f6da41bdd068ede3b08feecfa32447a2bc222981c8a8caa7ed72535a532e1"} Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.737173 4982 scope.go:117] "RemoveContainer" containerID="d55a1fca4b3dd2dbe7785e335ca2bfce3fd2115f620503fa9e260dbb2aac28c7" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.737172 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s92g9" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.780699 4982 scope.go:117] "RemoveContainer" containerID="58cd6b1d328e665d34604c062e700d76c40d7c3d913496dac7df0a6b2e0c63c9" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.789978 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rnfz2" podStartSLOduration=3.64745192 podStartE2EDuration="18.789959832s" podCreationTimestamp="2026-02-24 15:55:56 +0000 UTC" firstStartedPulling="2026-02-24 15:55:58.497307323 +0000 UTC m=+4020.116365816" lastFinishedPulling="2026-02-24 15:56:13.639815185 +0000 UTC m=+4035.258873728" observedRunningTime="2026-02-24 15:56:14.764796947 +0000 UTC m=+4036.383855460" watchObservedRunningTime="2026-02-24 15:56:14.789959832 +0000 UTC m=+4036.409018325" Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.798579 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s92g9"] Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.809356 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s92g9"] Feb 24 15:56:14 crc kubenswrapper[4982]: I0224 15:56:14.811703 4982 scope.go:117] "RemoveContainer" containerID="31d49f0efdc0b0238af9bb4d281dc5901d0f51990e633725669ed74f46f0a33c" Feb 24 15:56:15 crc kubenswrapper[4982]: I0224 15:56:15.162967 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" path="/var/lib/kubelet/pods/336678f2-e2a8-4f82-a275-bbbc3b3c574e/volumes" Feb 24 15:56:17 crc kubenswrapper[4982]: I0224 15:56:17.324951 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:56:17 crc kubenswrapper[4982]: I0224 15:56:17.325369 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:56:18 crc kubenswrapper[4982]: I0224 15:56:18.407144 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rnfz2" podUID="2ac0cb26-a32d-4377-afe7-33e056fd5f4d" containerName="registry-server" probeResult="failure" output=< Feb 24 15:56:18 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 15:56:18 crc kubenswrapper[4982]: > Feb 24 15:56:22 crc kubenswrapper[4982]: I0224 15:56:22.145563 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:56:22 crc kubenswrapper[4982]: E0224 15:56:22.146268 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:56:27 crc kubenswrapper[4982]: I0224 15:56:27.395315 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:56:27 crc kubenswrapper[4982]: I0224 15:56:27.481463 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rnfz2" Feb 24 15:56:28 crc kubenswrapper[4982]: I0224 
15:56:28.170697 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rnfz2"] Feb 24 15:56:28 crc kubenswrapper[4982]: I0224 15:56:28.209327 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tw86r"] Feb 24 15:56:28 crc kubenswrapper[4982]: I0224 15:56:28.209561 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tw86r" podUID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerName="registry-server" containerID="cri-o://23a6317975657e3133962b1f2469bf493f0e5e2f011997a6dcf6d0298145a527" gracePeriod=2 Feb 24 15:56:28 crc kubenswrapper[4982]: I0224 15:56:28.921193 4982 generic.go:334] "Generic (PLEG): container finished" podID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerID="23a6317975657e3133962b1f2469bf493f0e5e2f011997a6dcf6d0298145a527" exitCode=0 Feb 24 15:56:28 crc kubenswrapper[4982]: I0224 15:56:28.921274 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw86r" event={"ID":"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5","Type":"ContainerDied","Data":"23a6317975657e3133962b1f2469bf493f0e5e2f011997a6dcf6d0298145a527"} Feb 24 15:56:28 crc kubenswrapper[4982]: I0224 15:56:28.921966 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tw86r" event={"ID":"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5","Type":"ContainerDied","Data":"203226fc99592d59dbdf8d590e9291a369418a2136879982e2f607dc15616d0d"} Feb 24 15:56:28 crc kubenswrapper[4982]: I0224 15:56:28.921984 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203226fc99592d59dbdf8d590e9291a369418a2136879982e2f607dc15616d0d" Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.008544 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.130035 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45qb\" (UniqueName: \"kubernetes.io/projected/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-kube-api-access-b45qb\") pod \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.130454 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-utilities\") pod \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.130702 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-catalog-content\") pod \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\" (UID: \"031f25c3-ccd3-4aa9-815e-9a61baa6ecf5\") " Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.130856 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-utilities" (OuterVolumeSpecName: "utilities") pod "031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" (UID: "031f25c3-ccd3-4aa9-815e-9a61baa6ecf5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.132141 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.149718 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-kube-api-access-b45qb" (OuterVolumeSpecName: "kube-api-access-b45qb") pod "031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" (UID: "031f25c3-ccd3-4aa9-815e-9a61baa6ecf5"). InnerVolumeSpecName "kube-api-access-b45qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.257049 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45qb\" (UniqueName: \"kubernetes.io/projected/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-kube-api-access-b45qb\") on node \"crc\" DevicePath \"\"" Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.617217 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" (UID: "031f25c3-ccd3-4aa9-815e-9a61baa6ecf5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.666445 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.932267 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tw86r" Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.983772 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tw86r"] Feb 24 15:56:29 crc kubenswrapper[4982]: I0224 15:56:29.996573 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tw86r"] Feb 24 15:56:31 crc kubenswrapper[4982]: I0224 15:56:31.162610 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" path="/var/lib/kubelet/pods/031f25c3-ccd3-4aa9-815e-9a61baa6ecf5/volumes" Feb 24 15:56:34 crc kubenswrapper[4982]: I0224 15:56:34.146198 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:56:34 crc kubenswrapper[4982]: E0224 15:56:34.147103 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:56:45 crc kubenswrapper[4982]: I0224 15:56:45.146317 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:56:45 crc kubenswrapper[4982]: E0224 15:56:45.147476 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:56:49 crc kubenswrapper[4982]: I0224 15:56:49.836183 4982 scope.go:117] "RemoveContainer" containerID="695027bd7579a0eea7961da57b90a6c4454a31d8cdb0c2514a6823f28b576d1d" Feb 24 15:56:49 crc kubenswrapper[4982]: I0224 15:56:49.880737 4982 scope.go:117] "RemoveContainer" containerID="d998e46a3167b0c55198914e82fe3dec9985423062cbbfaa70c54289ad91711b" Feb 24 15:56:49 crc kubenswrapper[4982]: I0224 15:56:49.931680 4982 scope.go:117] "RemoveContainer" containerID="b684ff1544a5f5fbd8053fe6db4119cffee83cfc38f07ce988b5d783f869214d" Feb 24 15:56:50 crc kubenswrapper[4982]: I0224 15:56:50.022644 4982 scope.go:117] "RemoveContainer" containerID="23a6317975657e3133962b1f2469bf493f0e5e2f011997a6dcf6d0298145a527" Feb 24 15:56:57 crc kubenswrapper[4982]: I0224 15:56:57.147034 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:56:57 crc kubenswrapper[4982]: E0224 15:56:57.148122 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:57:09 crc kubenswrapper[4982]: I0224 15:57:09.155992 4982 scope.go:117] "RemoveContainer" 
containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:57:09 crc kubenswrapper[4982]: E0224 15:57:09.156990 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:57:24 crc kubenswrapper[4982]: I0224 15:57:24.146584 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:57:24 crc kubenswrapper[4982]: E0224 15:57:24.148935 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:57:35 crc kubenswrapper[4982]: I0224 15:57:35.148260 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:57:35 crc kubenswrapper[4982]: E0224 15:57:35.149941 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.522035 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xzgxj"] Feb 24 15:57:40 crc kubenswrapper[4982]: E0224 15:57:40.543288 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715906a3-3aa8-4ea0-82b2-f4d8fe085a04" containerName="oc" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.543326 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="715906a3-3aa8-4ea0-82b2-f4d8fe085a04" containerName="oc" Feb 24 15:57:40 crc kubenswrapper[4982]: E0224 15:57:40.543334 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerName="extract-utilities" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.543342 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerName="extract-utilities" Feb 24 15:57:40 crc kubenswrapper[4982]: E0224 15:57:40.543370 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerName="extract-utilities" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.543377 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerName="extract-utilities" Feb 24 15:57:40 crc kubenswrapper[4982]: E0224 15:57:40.543424 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerName="registry-server" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.543553 4982 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerName="registry-server" Feb 24 15:57:40 crc kubenswrapper[4982]: E0224 15:57:40.543568 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerName="extract-content" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.543574 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerName="extract-content" Feb 24 15:57:40 crc kubenswrapper[4982]: E0224 15:57:40.543595 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerName="registry-server" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.543610 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerName="registry-server" Feb 24 15:57:40 crc kubenswrapper[4982]: E0224 15:57:40.543642 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerName="extract-content" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.543649 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerName="extract-content" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.552022 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="031f25c3-ccd3-4aa9-815e-9a61baa6ecf5" containerName="registry-server" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.552280 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="715906a3-3aa8-4ea0-82b2-f4d8fe085a04" containerName="oc" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.552312 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="336678f2-e2a8-4f82-a275-bbbc3b3c574e" containerName="registry-server" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.565681 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.599382 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xzgxj"] Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.718231 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-utilities\") pod \"community-operators-xzgxj\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.718593 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ctr7\" (UniqueName: \"kubernetes.io/projected/5fbe16b0-0124-472a-bead-50b304a21f5a-kube-api-access-2ctr7\") pod \"community-operators-xzgxj\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.718711 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-catalog-content\") pod \"community-operators-xzgxj\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.821060 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-utilities\") pod \"community-operators-xzgxj\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.821162 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ctr7\" (UniqueName: \"kubernetes.io/projected/5fbe16b0-0124-472a-bead-50b304a21f5a-kube-api-access-2ctr7\") pod \"community-operators-xzgxj\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.821198 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-catalog-content\") pod \"community-operators-xzgxj\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.821743 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-utilities\") pod \"community-operators-xzgxj\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.821790 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-catalog-content\") pod \"community-operators-xzgxj\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.844482 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2ctr7\" (UniqueName: \"kubernetes.io/projected/5fbe16b0-0124-472a-bead-50b304a21f5a-kube-api-access-2ctr7\") pod \"community-operators-xzgxj\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:40 crc kubenswrapper[4982]: I0224 15:57:40.919940 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:41 crc kubenswrapper[4982]: W0224 15:57:41.440245 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fbe16b0_0124_472a_bead_50b304a21f5a.slice/crio-a243ddca00614daf82219682583bf0503c330b97fec36eb61c0e8d6ae3928d34 WatchSource:0}: Error finding container a243ddca00614daf82219682583bf0503c330b97fec36eb61c0e8d6ae3928d34: Status 404 returned error can't find the container with id a243ddca00614daf82219682583bf0503c330b97fec36eb61c0e8d6ae3928d34 Feb 24 15:57:41 crc kubenswrapper[4982]: I0224 15:57:41.442031 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xzgxj"] Feb 24 15:57:41 crc kubenswrapper[4982]: I0224 15:57:41.870474 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzgxj" event={"ID":"5fbe16b0-0124-472a-bead-50b304a21f5a","Type":"ContainerStarted","Data":"2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612"} Feb 24 15:57:41 crc kubenswrapper[4982]: I0224 15:57:41.870872 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzgxj" event={"ID":"5fbe16b0-0124-472a-bead-50b304a21f5a","Type":"ContainerStarted","Data":"a243ddca00614daf82219682583bf0503c330b97fec36eb61c0e8d6ae3928d34"} Feb 24 15:57:42 crc kubenswrapper[4982]: I0224 15:57:42.891987 4982 generic.go:334] "Generic (PLEG): container finished" podID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerID="2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612" exitCode=0 Feb 24 15:57:42 crc kubenswrapper[4982]: I0224 15:57:42.892225 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzgxj" event={"ID":"5fbe16b0-0124-472a-bead-50b304a21f5a","Type":"ContainerDied","Data":"2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612"} Feb 24 15:57:43 crc kubenswrapper[4982]: I0224 15:57:43.905374 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzgxj" event={"ID":"5fbe16b0-0124-472a-bead-50b304a21f5a","Type":"ContainerStarted","Data":"13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb"} Feb 24 15:57:45 crc kubenswrapper[4982]: I0224 15:57:45.934385 4982 generic.go:334] "Generic (PLEG): container finished" podID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerID="13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb" exitCode=0 Feb 24 15:57:45 crc kubenswrapper[4982]: I0224 15:57:45.934486 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzgxj" event={"ID":"5fbe16b0-0124-472a-bead-50b304a21f5a","Type":"ContainerDied","Data":"13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb"} Feb 24 15:57:46 crc kubenswrapper[4982]: I0224 15:57:46.948652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzgxj" 
event={"ID":"5fbe16b0-0124-472a-bead-50b304a21f5a","Type":"ContainerStarted","Data":"9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379"} Feb 24 15:57:46 crc kubenswrapper[4982]: I0224 15:57:46.976963 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xzgxj" podStartSLOduration=3.460967199 podStartE2EDuration="6.976941405s" podCreationTimestamp="2026-02-24 15:57:40 +0000 UTC" firstStartedPulling="2026-02-24 15:57:42.897666367 +0000 UTC m=+4124.516724890" lastFinishedPulling="2026-02-24 15:57:46.413640593 +0000 UTC m=+4128.032699096" observedRunningTime="2026-02-24 15:57:46.97196706 +0000 UTC m=+4128.591025563" watchObservedRunningTime="2026-02-24 15:57:46.976941405 +0000 UTC m=+4128.595999898" Feb 24 15:57:48 crc kubenswrapper[4982]: I0224 15:57:48.146144 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:57:48 crc kubenswrapper[4982]: E0224 15:57:48.147087 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:57:50 crc kubenswrapper[4982]: I0224 15:57:50.920540 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:50 crc kubenswrapper[4982]: I0224 15:57:50.921192 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:50 crc kubenswrapper[4982]: I0224 15:57:50.984405 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:51 crc kubenswrapper[4982]: I0224 15:57:51.079434 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:51 crc kubenswrapper[4982]: I0224 15:57:51.228213 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xzgxj"] Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.031446 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xzgxj" podUID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerName="registry-server" containerID="cri-o://9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379" gracePeriod=2 Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.781763 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.839153 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ctr7\" (UniqueName: \"kubernetes.io/projected/5fbe16b0-0124-472a-bead-50b304a21f5a-kube-api-access-2ctr7\") pod \"5fbe16b0-0124-472a-bead-50b304a21f5a\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.839476 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-catalog-content\") pod \"5fbe16b0-0124-472a-bead-50b304a21f5a\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.839581 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-utilities\") pod \"5fbe16b0-0124-472a-bead-50b304a21f5a\" (UID: \"5fbe16b0-0124-472a-bead-50b304a21f5a\") " Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.841463 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-utilities" (OuterVolumeSpecName: "utilities") pod "5fbe16b0-0124-472a-bead-50b304a21f5a" (UID: "5fbe16b0-0124-472a-bead-50b304a21f5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.851117 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fbe16b0-0124-472a-bead-50b304a21f5a-kube-api-access-2ctr7" (OuterVolumeSpecName: "kube-api-access-2ctr7") pod "5fbe16b0-0124-472a-bead-50b304a21f5a" (UID: "5fbe16b0-0124-472a-bead-50b304a21f5a"). InnerVolumeSpecName "kube-api-access-2ctr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.888198 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fbe16b0-0124-472a-bead-50b304a21f5a" (UID: "5fbe16b0-0124-472a-bead-50b304a21f5a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.942650 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.942688 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fbe16b0-0124-472a-bead-50b304a21f5a-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 15:57:53 crc kubenswrapper[4982]: I0224 15:57:53.942699 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ctr7\" (UniqueName: \"kubernetes.io/projected/5fbe16b0-0124-472a-bead-50b304a21f5a-kube-api-access-2ctr7\") on node \"crc\" DevicePath \"\"" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.046462 4982 generic.go:334] "Generic (PLEG): container finished" podID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerID="9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379" exitCode=0 Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.046584 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzgxj" event={"ID":"5fbe16b0-0124-472a-bead-50b304a21f5a","Type":"ContainerDied","Data":"9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379"} Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.046679 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xzgxj" event={"ID":"5fbe16b0-0124-472a-bead-50b304a21f5a","Type":"ContainerDied","Data":"a243ddca00614daf82219682583bf0503c330b97fec36eb61c0e8d6ae3928d34"} Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.046711 4982 scope.go:117] "RemoveContainer" containerID="9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.046917 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xzgxj" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.096852 4982 scope.go:117] "RemoveContainer" containerID="13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.100808 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xzgxj"] Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.111956 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xzgxj"] Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.133004 4982 scope.go:117] "RemoveContainer" containerID="2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.212370 4982 scope.go:117] "RemoveContainer" containerID="9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379" Feb 24 15:57:54 crc kubenswrapper[4982]: E0224 15:57:54.213289 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379\": container with ID starting with 9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379 not found: ID does not exist" containerID="9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.213343 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379"} err="failed to get container status \"9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379\": rpc error: code = NotFound desc = could not find container \"9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379\": container with ID starting with 9706e341ed1e52910fc0ec7f46852363a4d91fea1953671b4f051b11a4bec379 not found: ID does not exist" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.213375 4982 scope.go:117] "RemoveContainer" containerID="13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb" Feb 24 15:57:54 crc kubenswrapper[4982]: E0224 15:57:54.214382 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb\": container with ID starting with 13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb not found: ID does not exist" containerID="13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.214515 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb"} err="failed to get container status \"13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb\": rpc error: code = NotFound desc = could not find container \"13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb\": container with ID starting with 13ddf9a0ad9421f365101a63d4b067e3617f008f61c30da82ccab60f23a94bbb not found: ID does not exist" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.214603 4982 scope.go:117] "RemoveContainer" containerID="2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612" Feb 24 15:57:54 crc kubenswrapper[4982]: E0224 15:57:54.215073 4982 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612\": container with ID starting with 2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612 not found: ID does not exist" containerID="2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612" Feb 24 15:57:54 crc kubenswrapper[4982]: I0224 15:57:54.215109 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612"} err="failed to get container status \"2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612\": rpc error: code = NotFound desc = could not find container \"2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612\": container with ID starting with 2166cc1e0abc766c19682c63de6380618f897d190ce46c9ab4ad59659bb9b612 not found: ID does not exist" Feb 24 15:57:55 crc kubenswrapper[4982]: I0224 15:57:55.156767 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fbe16b0-0124-472a-bead-50b304a21f5a" path="/var/lib/kubelet/pods/5fbe16b0-0124-472a-bead-50b304a21f5a/volumes" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.159993 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532478-75s6j"] Feb 24 15:58:00 crc kubenswrapper[4982]: E0224 15:58:00.161790 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerName="registry-server" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.161813 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerName="registry-server" Feb 24 15:58:00 crc kubenswrapper[4982]: E0224 15:58:00.161834 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerName="extract-utilities" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.161846 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerName="extract-utilities" Feb 24 15:58:00 crc kubenswrapper[4982]: E0224 15:58:00.161877 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerName="extract-content" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.161889 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerName="extract-content" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.162317 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fbe16b0-0124-472a-bead-50b304a21f5a" containerName="registry-server" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.163594 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532478-75s6j" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.166434 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.166535 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.166639 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.170210 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532478-75s6j"] Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.301583 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhjlr\" (UniqueName: \"kubernetes.io/projected/907d4519-d060-495b-8cba-a2cc7ea757ee-kube-api-access-rhjlr\") pod \"auto-csr-approver-29532478-75s6j\" (UID: \"907d4519-d060-495b-8cba-a2cc7ea757ee\") " pod="openshift-infra/auto-csr-approver-29532478-75s6j" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.405042 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhjlr\" (UniqueName: \"kubernetes.io/projected/907d4519-d060-495b-8cba-a2cc7ea757ee-kube-api-access-rhjlr\") pod \"auto-csr-approver-29532478-75s6j\" (UID: \"907d4519-d060-495b-8cba-a2cc7ea757ee\") " pod="openshift-infra/auto-csr-approver-29532478-75s6j" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.428121 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhjlr\" (UniqueName: \"kubernetes.io/projected/907d4519-d060-495b-8cba-a2cc7ea757ee-kube-api-access-rhjlr\") pod \"auto-csr-approver-29532478-75s6j\" (UID: \"907d4519-d060-495b-8cba-a2cc7ea757ee\") " pod="openshift-infra/auto-csr-approver-29532478-75s6j" Feb 24 15:58:00 crc kubenswrapper[4982]: I0224 15:58:00.484446 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532478-75s6j" Feb 24 15:58:02 crc kubenswrapper[4982]: I0224 15:58:02.038731 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" podUID="d7687408-a30d-42c8-826f-759659e87262" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:58:02 crc kubenswrapper[4982]: I0224 15:58:02.039201 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" podUID="c0967978-25a6-416a-81be-1153d5f5f74b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:58:02 crc kubenswrapper[4982]: I0224 15:58:02.039246 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-pmh6c" podUID="d7687408-a30d-42c8-826f-759659e87262" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:58:02 crc kubenswrapper[4982]: I0224 15:58:02.039280 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-rgfzx" podUID="c0967978-25a6-416a-81be-1153d5f5f74b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:58:02 crc kubenswrapper[4982]: I0224 15:58:02.206914 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" podUID="cdca3167-b0ff-41e3-8802-02d92f829aff" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:58:02 crc kubenswrapper[4982]: I0224 15:58:02.208255 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xk8mt" podUID="cdca3167-b0ff-41e3-8802-02d92f829aff" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 15:58:02 crc kubenswrapper[4982]: E0224 15:58:02.243424 4982 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.099s" Feb 24 15:58:02 crc kubenswrapper[4982]: I0224 15:58:02.244543 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:58:02 crc kubenswrapper[4982]: E0224 15:58:02.245095 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:58:02 crc kubenswrapper[4982]: I0224 15:58:02.650171 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29532478-75s6j"] Feb 24 15:58:03 crc kubenswrapper[4982]: I0224 15:58:03.270735 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532478-75s6j" event={"ID":"907d4519-d060-495b-8cba-a2cc7ea757ee","Type":"ContainerStarted","Data":"32fd69271d32f92bf44f40d7c3f19acb694399e209733e87d6d693d5e0c000bc"} Feb 24 15:58:04 crc kubenswrapper[4982]: I0224 15:58:04.282612 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532478-75s6j" event={"ID":"907d4519-d060-495b-8cba-a2cc7ea757ee","Type":"ContainerStarted","Data":"2e338ea674451a7c809e8d72fd0bd7529e42db4fa848689829934df973e26006"} Feb 24 15:58:04 crc kubenswrapper[4982]: I0224 15:58:04.306164 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532478-75s6j" podStartSLOduration=3.22867513 podStartE2EDuration="4.306142038s" podCreationTimestamp="2026-02-24 15:58:00 +0000 UTC" firstStartedPulling="2026-02-24 15:58:02.650164993 +0000 UTC m=+4144.269223496" lastFinishedPulling="2026-02-24 15:58:03.727631891 +0000 UTC m=+4145.346690404" observedRunningTime="2026-02-24 15:58:04.299324782 +0000 UTC m=+4145.918383275" watchObservedRunningTime="2026-02-24 15:58:04.306142038 +0000 UTC m=+4145.925200531" Feb 24 15:58:05 crc kubenswrapper[4982]: I0224 15:58:05.305621 4982 generic.go:334] "Generic (PLEG): container finished" podID="907d4519-d060-495b-8cba-a2cc7ea757ee" containerID="2e338ea674451a7c809e8d72fd0bd7529e42db4fa848689829934df973e26006" exitCode=0 Feb 24 15:58:05 crc kubenswrapper[4982]: I0224 15:58:05.305791 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532478-75s6j" event={"ID":"907d4519-d060-495b-8cba-a2cc7ea757ee","Type":"ContainerDied","Data":"2e338ea674451a7c809e8d72fd0bd7529e42db4fa848689829934df973e26006"} Feb 24 15:58:06 crc kubenswrapper[4982]: I0224 15:58:06.779986 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532478-75s6j" Feb 24 15:58:06 crc kubenswrapper[4982]: I0224 15:58:06.898082 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhjlr\" (UniqueName: \"kubernetes.io/projected/907d4519-d060-495b-8cba-a2cc7ea757ee-kube-api-access-rhjlr\") pod \"907d4519-d060-495b-8cba-a2cc7ea757ee\" (UID: \"907d4519-d060-495b-8cba-a2cc7ea757ee\") " Feb 24 15:58:06 crc kubenswrapper[4982]: I0224 15:58:06.913317 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907d4519-d060-495b-8cba-a2cc7ea757ee-kube-api-access-rhjlr" (OuterVolumeSpecName: "kube-api-access-rhjlr") pod "907d4519-d060-495b-8cba-a2cc7ea757ee" (UID: "907d4519-d060-495b-8cba-a2cc7ea757ee"). InnerVolumeSpecName "kube-api-access-rhjlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 15:58:07 crc kubenswrapper[4982]: I0224 15:58:07.001922 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhjlr\" (UniqueName: \"kubernetes.io/projected/907d4519-d060-495b-8cba-a2cc7ea757ee-kube-api-access-rhjlr\") on node \"crc\" DevicePath \"\"" Feb 24 15:58:07 crc kubenswrapper[4982]: I0224 15:58:07.341010 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532478-75s6j" event={"ID":"907d4519-d060-495b-8cba-a2cc7ea757ee","Type":"ContainerDied","Data":"32fd69271d32f92bf44f40d7c3f19acb694399e209733e87d6d693d5e0c000bc"} Feb 24 15:58:07 crc kubenswrapper[4982]: I0224 15:58:07.341056 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532478-75s6j" Feb 24 15:58:07 crc kubenswrapper[4982]: I0224 15:58:07.341056 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32fd69271d32f92bf44f40d7c3f19acb694399e209733e87d6d693d5e0c000bc" Feb 24 15:58:07 crc kubenswrapper[4982]: I0224 15:58:07.383612 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532472-sbj2h"] Feb 24 15:58:07 crc kubenswrapper[4982]: I0224 15:58:07.401057 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532472-sbj2h"] Feb 24 15:58:09 crc kubenswrapper[4982]: I0224 15:58:09.159411 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb003bfc-c3b5-4200-afb4-6f1a513fedee" path="/var/lib/kubelet/pods/bb003bfc-c3b5-4200-afb4-6f1a513fedee/volumes" Feb 24 15:58:17 crc kubenswrapper[4982]: I0224 15:58:17.145541 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:58:17 crc kubenswrapper[4982]: E0224 15:58:17.146404 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:58:31 crc kubenswrapper[4982]: I0224 15:58:31.146016 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:58:31 crc kubenswrapper[4982]: E0224 15:58:31.146721 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:58:46 crc kubenswrapper[4982]: I0224 15:58:46.146718 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:58:46 crc kubenswrapper[4982]: E0224 15:58:46.148214 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:58:50 crc kubenswrapper[4982]: I0224 15:58:50.158455 4982 scope.go:117] "RemoveContainer" containerID="dd93c6de294749bfaf4e5f5f6ab3a5592ff0bb9b41436fab16dbbd17ad84ebaa" Feb 24 15:58:57 crc kubenswrapper[4982]: I0224 15:58:57.146208 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:58:57 crc kubenswrapper[4982]: E0224 15:58:57.146909 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 15:59:12 crc kubenswrapper[4982]: I0224 15:59:12.146630 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 15:59:13 crc kubenswrapper[4982]: I0224 15:59:13.110561 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"66de4f83f8b681be75748135f58476551b9f14aee0880491d293552e745c9035"} Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.157202 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh"] Feb 24 16:00:00 crc kubenswrapper[4982]: E0224 16:00:00.158651 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907d4519-d060-495b-8cba-a2cc7ea757ee" containerName="oc" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.158675 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="907d4519-d060-495b-8cba-a2cc7ea757ee" containerName="oc" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.159075 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="907d4519-d060-495b-8cba-a2cc7ea757ee" containerName="oc" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.160515 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.163922 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.164196 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.173943 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532480-5h625"] Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.175447 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532480-5h625" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.178706 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.178778 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.180191 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.193719 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532480-5h625"] Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.218560 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh"] Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.279762 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx8s9\" (UniqueName: \"kubernetes.io/projected/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-kube-api-access-nx8s9\") pod \"collect-profiles-29532480-5hfvh\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.279813 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-secret-volume\") pod \"collect-profiles-29532480-5hfvh\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.279840 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9fgn\" (UniqueName: \"kubernetes.io/projected/19a8be4e-9329-41e5-bcc7-50e8dbc8026f-kube-api-access-q9fgn\") pod \"auto-csr-approver-29532480-5h625\" (UID: \"19a8be4e-9329-41e5-bcc7-50e8dbc8026f\") " pod="openshift-infra/auto-csr-approver-29532480-5h625" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.279899 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-config-volume\") pod \"collect-profiles-29532480-5hfvh\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.381953 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx8s9\" (UniqueName: \"kubernetes.io/projected/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-kube-api-access-nx8s9\") pod \"collect-profiles-29532480-5hfvh\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.381997 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-secret-volume\") pod \"collect-profiles-29532480-5hfvh\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.382020 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9fgn\" (UniqueName: \"kubernetes.io/projected/19a8be4e-9329-41e5-bcc7-50e8dbc8026f-kube-api-access-q9fgn\") pod \"auto-csr-approver-29532480-5h625\" (UID: \"19a8be4e-9329-41e5-bcc7-50e8dbc8026f\") " pod="openshift-infra/auto-csr-approver-29532480-5h625" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.382073 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-config-volume\") pod \"collect-profiles-29532480-5hfvh\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.382999 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-config-volume\") pod \"collect-profiles-29532480-5hfvh\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.388767 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-secret-volume\") pod \"collect-profiles-29532480-5hfvh\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.398907 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9fgn\" (UniqueName: \"kubernetes.io/projected/19a8be4e-9329-41e5-bcc7-50e8dbc8026f-kube-api-access-q9fgn\") pod \"auto-csr-approver-29532480-5h625\" (UID: \"19a8be4e-9329-41e5-bcc7-50e8dbc8026f\") " pod="openshift-infra/auto-csr-approver-29532480-5h625" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.401304 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx8s9\" (UniqueName: \"kubernetes.io/projected/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-kube-api-access-nx8s9\") pod \"collect-profiles-29532480-5hfvh\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.493666 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:00 crc kubenswrapper[4982]: I0224 16:00:00.505214 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532480-5h625" Feb 24 16:00:01 crc kubenswrapper[4982]: I0224 16:00:01.027053 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh"] Feb 24 16:00:01 crc kubenswrapper[4982]: I0224 16:00:01.039799 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532480-5h625"] Feb 24 16:00:01 crc kubenswrapper[4982]: I0224 16:00:01.055048 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 16:00:01 crc kubenswrapper[4982]: I0224 16:00:01.737974 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532480-5h625" event={"ID":"19a8be4e-9329-41e5-bcc7-50e8dbc8026f","Type":"ContainerStarted","Data":"bd61cd5ceab273ff8ea76ab2d5a8ffda29b1a59dcea9bd861ec073a4920122c2"} Feb 24 16:00:01 crc kubenswrapper[4982]: I0224 16:00:01.740771 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" event={"ID":"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3","Type":"ContainerStarted","Data":"4024d0dc8573b0593e580da50527902e8b32c770490c99accdf35118b9a89524"} Feb 24 16:00:01 crc kubenswrapper[4982]: I0224 16:00:01.740795 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" event={"ID":"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3","Type":"ContainerStarted","Data":"2761a2db2baad25cdcf75814c813b74a2adec7fdbf5070e6e300106f58746444"} Feb 24 16:00:01 crc kubenswrapper[4982]: I0224 16:00:01.758378 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" podStartSLOduration=1.758365499 podStartE2EDuration="1.758365499s" podCreationTimestamp="2026-02-24 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 16:00:01.75176294 +0000 UTC m=+4263.370821433" watchObservedRunningTime="2026-02-24 16:00:01.758365499 +0000 UTC m=+4263.377423992" Feb 24 16:00:02 crc kubenswrapper[4982]: I0224 16:00:02.769644 4982 generic.go:334] "Generic (PLEG): container finished" podID="c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3" containerID="4024d0dc8573b0593e580da50527902e8b32c770490c99accdf35118b9a89524" exitCode=0 Feb 24 16:00:02 crc kubenswrapper[4982]: I0224 16:00:02.769943 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" event={"ID":"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3","Type":"ContainerDied","Data":"4024d0dc8573b0593e580da50527902e8b32c770490c99accdf35118b9a89524"} Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.250313 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.382181 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-secret-volume\") pod \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.382391 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-config-volume\") pod \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.382441 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx8s9\" (UniqueName: \"kubernetes.io/projected/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-kube-api-access-nx8s9\") pod \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\" (UID: \"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3\") " Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.383356 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-config-volume" (OuterVolumeSpecName: "config-volume") pod "c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3" (UID: "c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.384487 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.388790 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3" (UID: "c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.398346 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-kube-api-access-nx8s9" (OuterVolumeSpecName: "kube-api-access-nx8s9") pod "c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3" (UID: "c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3"). InnerVolumeSpecName "kube-api-access-nx8s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.485513 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.485550 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx8s9\" (UniqueName: \"kubernetes.io/projected/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3-kube-api-access-nx8s9\") on node \"crc\" DevicePath \"\"" Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.802165 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" event={"ID":"c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3","Type":"ContainerDied","Data":"2761a2db2baad25cdcf75814c813b74a2adec7fdbf5070e6e300106f58746444"} Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.802206 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2761a2db2baad25cdcf75814c813b74a2adec7fdbf5070e6e300106f58746444" Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.802290 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh" Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.846882 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh"] Feb 24 16:00:04 crc kubenswrapper[4982]: I0224 16:00:04.855886 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532435-2fszh"] Feb 24 16:00:05 crc kubenswrapper[4982]: I0224 16:00:05.160405 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b1a558-6f90-43fc-8319-c988ad2c3a1d" path="/var/lib/kubelet/pods/70b1a558-6f90-43fc-8319-c988ad2c3a1d/volumes" Feb 24 16:00:10 crc kubenswrapper[4982]: I0224 16:00:10.870636 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532480-5h625" event={"ID":"19a8be4e-9329-41e5-bcc7-50e8dbc8026f","Type":"ContainerStarted","Data":"019dd81c02c07af4791fbc0105d132e235dd02cb2add7f4f093c16ad09286577"} Feb 24 16:00:10 crc kubenswrapper[4982]: I0224 16:00:10.899928 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532480-5h625" podStartSLOduration=1.965602904 podStartE2EDuration="10.899903101s" podCreationTimestamp="2026-02-24 16:00:00 +0000 UTC" firstStartedPulling="2026-02-24 16:00:01.054793156 +0000 UTC m=+4262.673851649" lastFinishedPulling="2026-02-24 16:00:09.989093353 +0000 UTC m=+4271.608151846" observedRunningTime="2026-02-24 16:00:10.886171138 +0000 UTC m=+4272.505229641" watchObservedRunningTime="2026-02-24 16:00:10.899903101 +0000 UTC m=+4272.518961604" Feb 24 16:00:11 crc kubenswrapper[4982]: I0224 16:00:11.901679 4982 generic.go:334] "Generic (PLEG): container finished" podID="19a8be4e-9329-41e5-bcc7-50e8dbc8026f" containerID="019dd81c02c07af4791fbc0105d132e235dd02cb2add7f4f093c16ad09286577" exitCode=0 Feb 24 16:00:11 crc kubenswrapper[4982]: I0224 16:00:11.901965 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532480-5h625" 
event={"ID":"19a8be4e-9329-41e5-bcc7-50e8dbc8026f","Type":"ContainerDied","Data":"019dd81c02c07af4791fbc0105d132e235dd02cb2add7f4f093c16ad09286577"} Feb 24 16:00:13 crc kubenswrapper[4982]: I0224 16:00:13.698440 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532480-5h625" Feb 24 16:00:13 crc kubenswrapper[4982]: I0224 16:00:13.850226 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9fgn\" (UniqueName: \"kubernetes.io/projected/19a8be4e-9329-41e5-bcc7-50e8dbc8026f-kube-api-access-q9fgn\") pod \"19a8be4e-9329-41e5-bcc7-50e8dbc8026f\" (UID: \"19a8be4e-9329-41e5-bcc7-50e8dbc8026f\") " Feb 24 16:00:13 crc kubenswrapper[4982]: I0224 16:00:13.862312 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a8be4e-9329-41e5-bcc7-50e8dbc8026f-kube-api-access-q9fgn" (OuterVolumeSpecName: "kube-api-access-q9fgn") pod "19a8be4e-9329-41e5-bcc7-50e8dbc8026f" (UID: "19a8be4e-9329-41e5-bcc7-50e8dbc8026f"). InnerVolumeSpecName "kube-api-access-q9fgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:00:13 crc kubenswrapper[4982]: I0224 16:00:13.932731 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532480-5h625" event={"ID":"19a8be4e-9329-41e5-bcc7-50e8dbc8026f","Type":"ContainerDied","Data":"bd61cd5ceab273ff8ea76ab2d5a8ffda29b1a59dcea9bd861ec073a4920122c2"} Feb 24 16:00:13 crc kubenswrapper[4982]: I0224 16:00:13.932777 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd61cd5ceab273ff8ea76ab2d5a8ffda29b1a59dcea9bd861ec073a4920122c2" Feb 24 16:00:13 crc kubenswrapper[4982]: I0224 16:00:13.932800 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532480-5h625" Feb 24 16:00:13 crc kubenswrapper[4982]: I0224 16:00:13.953469 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9fgn\" (UniqueName: \"kubernetes.io/projected/19a8be4e-9329-41e5-bcc7-50e8dbc8026f-kube-api-access-q9fgn\") on node \"crc\" DevicePath \"\"" Feb 24 16:00:13 crc kubenswrapper[4982]: I0224 16:00:13.965670 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532474-z688f"] Feb 24 16:00:13 crc kubenswrapper[4982]: I0224 16:00:13.974768 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532474-z688f"] Feb 24 16:00:15 crc kubenswrapper[4982]: I0224 16:00:15.169151 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf171e-d79a-4311-a224-14e17384565c" path="/var/lib/kubelet/pods/87cf171e-d79a-4311-a224-14e17384565c/volumes" Feb 24 16:00:50 crc kubenswrapper[4982]: I0224 16:00:50.298855 4982 scope.go:117] "RemoveContainer" containerID="f74ae4766ded6c82913e3b0ec76971925b598ebc17ef701f9ff199996ff997ba" Feb 24 16:00:51 crc kubenswrapper[4982]: I0224 16:00:51.202885 4982 scope.go:117] "RemoveContainer" containerID="69548f0616e983abd01e19c9de2e2a8cebe06073987236aa63ec46a8433f14e3" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.235579 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29532481-7srlf"] Feb 24 16:01:00 crc kubenswrapper[4982]: E0224 16:01:00.236817 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3" containerName="collect-profiles" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.236830 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3" containerName="collect-profiles" Feb 24 16:01:00 crc kubenswrapper[4982]: E0224 16:01:00.236876 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a8be4e-9329-41e5-bcc7-50e8dbc8026f" containerName="oc" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.236883 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a8be4e-9329-41e5-bcc7-50e8dbc8026f" containerName="oc" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.237302 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a8be4e-9329-41e5-bcc7-50e8dbc8026f" containerName="oc" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.237332 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3" containerName="collect-profiles" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.238342 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.267239 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29532481-7srlf"] Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.409893 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-fernet-keys\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.409977 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-config-data\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.410007 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-combined-ca-bundle\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.410102 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqtkn\" (UniqueName: \"kubernetes.io/projected/e79089ba-59a1-4b26-b33f-9f4b5406d41e-kube-api-access-wqtkn\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.512112 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-fernet-keys\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.512222 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-config-data\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.512268 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-combined-ca-bundle\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.512379 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqtkn\" (UniqueName: \"kubernetes.io/projected/e79089ba-59a1-4b26-b33f-9f4b5406d41e-kube-api-access-wqtkn\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.518885 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-config-data\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.518984 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-fernet-keys\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.519218 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-combined-ca-bundle\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.527575 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqtkn\" (UniqueName: \"kubernetes.io/projected/e79089ba-59a1-4b26-b33f-9f4b5406d41e-kube-api-access-wqtkn\") pod \"keystone-cron-29532481-7srlf\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:00 crc kubenswrapper[4982]: I0224 16:01:00.626945 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:01 crc kubenswrapper[4982]: I0224 16:01:01.124623 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29532481-7srlf"] Feb 24 16:01:01 crc kubenswrapper[4982]: I0224 16:01:01.572097 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29532481-7srlf" event={"ID":"e79089ba-59a1-4b26-b33f-9f4b5406d41e","Type":"ContainerStarted","Data":"8bff46371b373f508394da2fcfc66e2d629d7684f4075690650080fee0e9b6a9"} Feb 24 16:01:01 crc kubenswrapper[4982]: I0224 16:01:01.572149 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29532481-7srlf" event={"ID":"e79089ba-59a1-4b26-b33f-9f4b5406d41e","Type":"ContainerStarted","Data":"c65d07773607f154732d6729bcddef9daa733b76fda033eddd06c9c3fd9b0fef"} Feb 24 16:01:05 crc kubenswrapper[4982]: I0224 16:01:05.615237 4982 generic.go:334] "Generic (PLEG): container finished" podID="e79089ba-59a1-4b26-b33f-9f4b5406d41e" containerID="8bff46371b373f508394da2fcfc66e2d629d7684f4075690650080fee0e9b6a9" exitCode=0 Feb 24 16:01:05 crc kubenswrapper[4982]: I0224 16:01:05.615367 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29532481-7srlf" event={"ID":"e79089ba-59a1-4b26-b33f-9f4b5406d41e","Type":"ContainerDied","Data":"8bff46371b373f508394da2fcfc66e2d629d7684f4075690650080fee0e9b6a9"} Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.084359 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.190083 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-config-data\") pod \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.190271 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqtkn\" (UniqueName: \"kubernetes.io/projected/e79089ba-59a1-4b26-b33f-9f4b5406d41e-kube-api-access-wqtkn\") pod \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.190345 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-fernet-keys\") pod \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.190521 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-combined-ca-bundle\") pod \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\" (UID: \"e79089ba-59a1-4b26-b33f-9f4b5406d41e\") " Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.197314 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79089ba-59a1-4b26-b33f-9f4b5406d41e-kube-api-access-wqtkn" (OuterVolumeSpecName: "kube-api-access-wqtkn") pod "e79089ba-59a1-4b26-b33f-9f4b5406d41e" (UID: "e79089ba-59a1-4b26-b33f-9f4b5406d41e"). InnerVolumeSpecName "kube-api-access-wqtkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.198100 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e79089ba-59a1-4b26-b33f-9f4b5406d41e" (UID: "e79089ba-59a1-4b26-b33f-9f4b5406d41e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.224110 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e79089ba-59a1-4b26-b33f-9f4b5406d41e" (UID: "e79089ba-59a1-4b26-b33f-9f4b5406d41e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.275001 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-config-data" (OuterVolumeSpecName: "config-data") pod "e79089ba-59a1-4b26-b33f-9f4b5406d41e" (UID: "e79089ba-59a1-4b26-b33f-9f4b5406d41e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.294551 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.294824 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqtkn\" (UniqueName: \"kubernetes.io/projected/e79089ba-59a1-4b26-b33f-9f4b5406d41e-kube-api-access-wqtkn\") on node \"crc\" DevicePath \"\"" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.295068 4982 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.295111 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79089ba-59a1-4b26-b33f-9f4b5406d41e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.644993 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29532481-7srlf" event={"ID":"e79089ba-59a1-4b26-b33f-9f4b5406d41e","Type":"ContainerDied","Data":"c65d07773607f154732d6729bcddef9daa733b76fda033eddd06c9c3fd9b0fef"} Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.645374 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65d07773607f154732d6729bcddef9daa733b76fda033eddd06c9c3fd9b0fef" Feb 24 16:01:07 crc kubenswrapper[4982]: I0224 16:01:07.645049 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29532481-7srlf" Feb 24 16:01:38 crc kubenswrapper[4982]: I0224 16:01:38.738708 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:01:38 crc kubenswrapper[4982]: I0224 16:01:38.739452 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.176851 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532482-zdzqf"] Feb 24 16:02:00 crc kubenswrapper[4982]: E0224 16:02:00.178506 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79089ba-59a1-4b26-b33f-9f4b5406d41e" containerName="keystone-cron" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.178574 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79089ba-59a1-4b26-b33f-9f4b5406d41e" containerName="keystone-cron" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.179125 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79089ba-59a1-4b26-b33f-9f4b5406d41e" containerName="keystone-cron" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.180772 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532482-zdzqf" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.184238 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.184426 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.184574 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.188934 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532482-zdzqf"] Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.234570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lfxf\" (UniqueName: \"kubernetes.io/projected/d54424d9-e6c7-41a2-be2a-31be457bffd7-kube-api-access-8lfxf\") pod \"auto-csr-approver-29532482-zdzqf\" (UID: \"d54424d9-e6c7-41a2-be2a-31be457bffd7\") " pod="openshift-infra/auto-csr-approver-29532482-zdzqf" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.337100 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lfxf\" (UniqueName: \"kubernetes.io/projected/d54424d9-e6c7-41a2-be2a-31be457bffd7-kube-api-access-8lfxf\") pod \"auto-csr-approver-29532482-zdzqf\" (UID: \"d54424d9-e6c7-41a2-be2a-31be457bffd7\") " pod="openshift-infra/auto-csr-approver-29532482-zdzqf" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.361987 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lfxf\" (UniqueName: \"kubernetes.io/projected/d54424d9-e6c7-41a2-be2a-31be457bffd7-kube-api-access-8lfxf\") pod \"auto-csr-approver-29532482-zdzqf\" (UID: \"d54424d9-e6c7-41a2-be2a-31be457bffd7\") " pod="openshift-infra/auto-csr-approver-29532482-zdzqf" Feb 24 16:02:00 crc kubenswrapper[4982]: I0224 16:02:00.507109 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532482-zdzqf" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.041194 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532482-zdzqf"] Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.142159 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-klpdl"] Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.145693 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.172117 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klpdl"] Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.257765 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-catalog-content\") pod \"certified-operators-klpdl\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.257852 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-utilities\") pod \"certified-operators-klpdl\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.258606 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzbtt\" (UniqueName: \"kubernetes.io/projected/f684b339-eaff-433e-8f63-764e171dd50c-kube-api-access-rzbtt\") pod \"certified-operators-klpdl\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.361187 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzbtt\" (UniqueName: \"kubernetes.io/projected/f684b339-eaff-433e-8f63-764e171dd50c-kube-api-access-rzbtt\") pod \"certified-operators-klpdl\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.361377 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-catalog-content\") pod \"certified-operators-klpdl\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.361434 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-utilities\") pod \"certified-operators-klpdl\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.362028 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-catalog-content\") pod \"certified-operators-klpdl\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.362533 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-utilities\") pod \"certified-operators-klpdl\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.388608 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rzbtt\" (UniqueName: \"kubernetes.io/projected/f684b339-eaff-433e-8f63-764e171dd50c-kube-api-access-rzbtt\") pod \"certified-operators-klpdl\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.409174 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532482-zdzqf" event={"ID":"d54424d9-e6c7-41a2-be2a-31be457bffd7","Type":"ContainerStarted","Data":"7fdd0cf2326fc5e0bf9a665e7a7ccb129dc9ff945c611109dfe66ed80777f7d8"} Feb 24 16:02:01 crc kubenswrapper[4982]: I0224 16:02:01.473225 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:02 crc kubenswrapper[4982]: I0224 16:02:02.030979 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klpdl"] Feb 24 16:02:02 crc kubenswrapper[4982]: W0224 16:02:02.035095 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf684b339_eaff_433e_8f63_764e171dd50c.slice/crio-16f6afc02c3584c4cd0dcedf4a325c56e62d99dc8e244f11a6346cc5180b70e5 WatchSource:0}: Error finding container 16f6afc02c3584c4cd0dcedf4a325c56e62d99dc8e244f11a6346cc5180b70e5: Status 404 returned error can't find the container with id 16f6afc02c3584c4cd0dcedf4a325c56e62d99dc8e244f11a6346cc5180b70e5 Feb 24 16:02:02 crc kubenswrapper[4982]: I0224 16:02:02.447742 4982 generic.go:334] "Generic (PLEG): container finished" podID="f684b339-eaff-433e-8f63-764e171dd50c" containerID="3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e" exitCode=0 Feb 24 16:02:02 crc kubenswrapper[4982]: I0224 16:02:02.447992 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klpdl" event={"ID":"f684b339-eaff-433e-8f63-764e171dd50c","Type":"ContainerDied","Data":"3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e"} Feb 24 16:02:02 crc kubenswrapper[4982]: I0224 16:02:02.448016 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klpdl" event={"ID":"f684b339-eaff-433e-8f63-764e171dd50c","Type":"ContainerStarted","Data":"16f6afc02c3584c4cd0dcedf4a325c56e62d99dc8e244f11a6346cc5180b70e5"} Feb 24 16:02:03 crc kubenswrapper[4982]: I0224 16:02:03.462091 4982 generic.go:334] "Generic (PLEG): container finished" podID="d54424d9-e6c7-41a2-be2a-31be457bffd7" containerID="13a41bdfaec9ac13e9f4aa307b6ac6773b1867914c0f78c611a9af6f3486ed19" exitCode=0 Feb 24 16:02:03 crc kubenswrapper[4982]: I0224 16:02:03.462187 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532482-zdzqf" event={"ID":"d54424d9-e6c7-41a2-be2a-31be457bffd7","Type":"ContainerDied","Data":"13a41bdfaec9ac13e9f4aa307b6ac6773b1867914c0f78c611a9af6f3486ed19"} Feb 24 16:02:03 crc kubenswrapper[4982]: I0224 16:02:03.469915 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klpdl" event={"ID":"f684b339-eaff-433e-8f63-764e171dd50c","Type":"ContainerStarted","Data":"f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1"} Feb 24 16:02:05 crc kubenswrapper[4982]: I0224 16:02:05.009504 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532482-zdzqf" Feb 24 16:02:05 crc kubenswrapper[4982]: I0224 16:02:05.073627 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lfxf\" (UniqueName: \"kubernetes.io/projected/d54424d9-e6c7-41a2-be2a-31be457bffd7-kube-api-access-8lfxf\") pod \"d54424d9-e6c7-41a2-be2a-31be457bffd7\" (UID: \"d54424d9-e6c7-41a2-be2a-31be457bffd7\") " Feb 24 16:02:05 crc kubenswrapper[4982]: I0224 16:02:05.083488 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54424d9-e6c7-41a2-be2a-31be457bffd7-kube-api-access-8lfxf" (OuterVolumeSpecName: "kube-api-access-8lfxf") pod "d54424d9-e6c7-41a2-be2a-31be457bffd7" (UID: "d54424d9-e6c7-41a2-be2a-31be457bffd7"). InnerVolumeSpecName "kube-api-access-8lfxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:02:05 crc kubenswrapper[4982]: I0224 16:02:05.178341 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lfxf\" (UniqueName: \"kubernetes.io/projected/d54424d9-e6c7-41a2-be2a-31be457bffd7-kube-api-access-8lfxf\") on node \"crc\" DevicePath \"\"" Feb 24 16:02:05 crc kubenswrapper[4982]: I0224 16:02:05.495186 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532482-zdzqf" Feb 24 16:02:05 crc kubenswrapper[4982]: I0224 16:02:05.495196 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532482-zdzqf" event={"ID":"d54424d9-e6c7-41a2-be2a-31be457bffd7","Type":"ContainerDied","Data":"7fdd0cf2326fc5e0bf9a665e7a7ccb129dc9ff945c611109dfe66ed80777f7d8"} Feb 24 16:02:05 crc kubenswrapper[4982]: I0224 16:02:05.495243 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fdd0cf2326fc5e0bf9a665e7a7ccb129dc9ff945c611109dfe66ed80777f7d8" Feb 24 16:02:05 crc kubenswrapper[4982]: I0224 16:02:05.499231 4982 generic.go:334] "Generic (PLEG): container finished" podID="f684b339-eaff-433e-8f63-764e171dd50c" containerID="f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1" exitCode=0 Feb 24 16:02:05 crc kubenswrapper[4982]: I0224 16:02:05.499295 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klpdl" event={"ID":"f684b339-eaff-433e-8f63-764e171dd50c","Type":"ContainerDied","Data":"f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1"} Feb 24 16:02:06 crc kubenswrapper[4982]: I0224 16:02:06.120636 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532476-hhpfz"] Feb 24 16:02:06 crc kubenswrapper[4982]: I0224 16:02:06.131551 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532476-hhpfz"] Feb 24 16:02:06 crc kubenswrapper[4982]: I0224 16:02:06.517087 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klpdl" event={"ID":"f684b339-eaff-433e-8f63-764e171dd50c","Type":"ContainerStarted","Data":"965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689"} Feb 24 16:02:06 crc kubenswrapper[4982]: I0224 16:02:06.565183 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-klpdl" podStartSLOduration=2.124343394 podStartE2EDuration="5.565155433s" podCreationTimestamp="2026-02-24 16:02:01 +0000 UTC" firstStartedPulling="2026-02-24 16:02:02.455483786 +0000 UTC 
m=+4384.074542279" lastFinishedPulling="2026-02-24 16:02:05.896295815 +0000 UTC m=+4387.515354318" observedRunningTime="2026-02-24 16:02:06.553343101 +0000 UTC m=+4388.172401594" watchObservedRunningTime="2026-02-24 16:02:06.565155433 +0000 UTC m=+4388.184213936" Feb 24 16:02:07 crc kubenswrapper[4982]: I0224 16:02:07.160924 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715906a3-3aa8-4ea0-82b2-f4d8fe085a04" path="/var/lib/kubelet/pods/715906a3-3aa8-4ea0-82b2-f4d8fe085a04/volumes" Feb 24 16:02:08 crc kubenswrapper[4982]: I0224 16:02:08.737992 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:02:08 crc kubenswrapper[4982]: I0224 16:02:08.738372 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:02:11 crc kubenswrapper[4982]: I0224 16:02:11.473792 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:11 crc kubenswrapper[4982]: I0224 16:02:11.474173 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:11 crc kubenswrapper[4982]: I0224 16:02:11.543817 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:11 crc kubenswrapper[4982]: I0224 16:02:11.662332 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:11 crc kubenswrapper[4982]: I0224 16:02:11.783952 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klpdl"] Feb 24 16:02:13 crc kubenswrapper[4982]: I0224 16:02:13.628210 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-klpdl" podUID="f684b339-eaff-433e-8f63-764e171dd50c" containerName="registry-server" containerID="cri-o://965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689" gracePeriod=2 Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.112084 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.213633 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzbtt\" (UniqueName: \"kubernetes.io/projected/f684b339-eaff-433e-8f63-764e171dd50c-kube-api-access-rzbtt\") pod \"f684b339-eaff-433e-8f63-764e171dd50c\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.214013 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-utilities\") pod \"f684b339-eaff-433e-8f63-764e171dd50c\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.214043 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-catalog-content\") pod \"f684b339-eaff-433e-8f63-764e171dd50c\" (UID: \"f684b339-eaff-433e-8f63-764e171dd50c\") " Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.214942 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-utilities" (OuterVolumeSpecName: "utilities") pod "f684b339-eaff-433e-8f63-764e171dd50c" (UID: "f684b339-eaff-433e-8f63-764e171dd50c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.224990 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f684b339-eaff-433e-8f63-764e171dd50c-kube-api-access-rzbtt" (OuterVolumeSpecName: "kube-api-access-rzbtt") pod "f684b339-eaff-433e-8f63-764e171dd50c" (UID: "f684b339-eaff-433e-8f63-764e171dd50c"). InnerVolumeSpecName "kube-api-access-rzbtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.267802 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f684b339-eaff-433e-8f63-764e171dd50c" (UID: "f684b339-eaff-433e-8f63-764e171dd50c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.316965 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.317000 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f684b339-eaff-433e-8f63-764e171dd50c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.317012 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzbtt\" (UniqueName: \"kubernetes.io/projected/f684b339-eaff-433e-8f63-764e171dd50c-kube-api-access-rzbtt\") on node \"crc\" DevicePath \"\"" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.641054 4982 generic.go:334] "Generic (PLEG): container finished" podID="f684b339-eaff-433e-8f63-764e171dd50c" containerID="965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689" exitCode=0 Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.641118 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klpdl" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.641142 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klpdl" event={"ID":"f684b339-eaff-433e-8f63-764e171dd50c","Type":"ContainerDied","Data":"965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689"} Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.642266 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klpdl" event={"ID":"f684b339-eaff-433e-8f63-764e171dd50c","Type":"ContainerDied","Data":"16f6afc02c3584c4cd0dcedf4a325c56e62d99dc8e244f11a6346cc5180b70e5"} Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.642284 4982 scope.go:117] "RemoveContainer" containerID="965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.671963 4982 scope.go:117] "RemoveContainer" containerID="f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.692764 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klpdl"] Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.703050 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-klpdl"] Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.728686 4982 scope.go:117] "RemoveContainer" containerID="3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.749993 4982 scope.go:117] "RemoveContainer" containerID="965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689" Feb 24 16:02:14 crc kubenswrapper[4982]: E0224 16:02:14.750791 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689\": container with ID starting with 965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689 not found: ID does not exist" containerID="965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.751155 
4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689"} err="failed to get container status \"965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689\": rpc error: code = NotFound desc = could not find container \"965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689\": container with ID starting with 965be32a87db3e809d51a38fae9e6a7fddab59c580d106d7f847c20f70c7f689 not found: ID does not exist" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.751212 4982 scope.go:117] "RemoveContainer" containerID="f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1" Feb 24 16:02:14 crc kubenswrapper[4982]: E0224 16:02:14.752053 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1\": container with ID starting with f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1 not found: ID does not exist" containerID="f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.752088 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1"} err="failed to get container status \"f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1\": rpc error: code = NotFound desc = could not find container \"f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1\": container with ID starting with f6e00e88fe1fd5654bea912e4c61bcf9b68ec80104ae61d20529a1463bec00d1 not found: ID does not exist" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.752108 4982 scope.go:117] "RemoveContainer" containerID="3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e" Feb 24 16:02:14 crc kubenswrapper[4982]: E0224 16:02:14.752395 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e\": container with ID starting with 3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e not found: ID does not exist" containerID="3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e" Feb 24 16:02:14 crc kubenswrapper[4982]: I0224 16:02:14.752413 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e"} err="failed to get container status \"3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e\": rpc error: code = NotFound desc = could not find container \"3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e\": container with ID starting with 3048d52f18eccd9418dea0e8c64f934155b318b94d4b8ffe0af072423ca3186e not found: ID does not exist" Feb 24 16:02:15 crc kubenswrapper[4982]: I0224 16:02:15.189899 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f684b339-eaff-433e-8f63-764e171dd50c" path="/var/lib/kubelet/pods/f684b339-eaff-433e-8f63-764e171dd50c/volumes" Feb 24 16:02:38 crc kubenswrapper[4982]: I0224 16:02:38.737922 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:02:38 crc kubenswrapper[4982]: I0224 16:02:38.739731 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:02:38 crc kubenswrapper[4982]: I0224 16:02:38.739938 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 16:02:38 crc kubenswrapper[4982]: I0224 16:02:38.741134 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66de4f83f8b681be75748135f58476551b9f14aee0880491d293552e745c9035"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 16:02:38 crc kubenswrapper[4982]: I0224 16:02:38.741353 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://66de4f83f8b681be75748135f58476551b9f14aee0880491d293552e745c9035" gracePeriod=600 Feb 24 16:02:39 crc kubenswrapper[4982]: I0224 16:02:39.959991 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="66de4f83f8b681be75748135f58476551b9f14aee0880491d293552e745c9035" exitCode=0 Feb 24 16:02:39 crc kubenswrapper[4982]: I0224 16:02:39.960057 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"66de4f83f8b681be75748135f58476551b9f14aee0880491d293552e745c9035"} Feb 24 16:02:39 crc kubenswrapper[4982]: I0224 16:02:39.960565 4982 scope.go:117] "RemoveContainer" containerID="f617cee0bcccb6a4ba053ba97c2de7eb8a3326b8596eaaee5e9843b92c62ca62" Feb 24 16:02:40 crc kubenswrapper[4982]: I0224 16:02:40.970801 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"} Feb 24 16:02:51 crc kubenswrapper[4982]: I0224 16:02:51.375017 4982 scope.go:117] "RemoveContainer" containerID="add0f34626b3df4c7d0163b444889982117d99a9b328e5d6b76d7f6b72ca4927" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.181338 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532484-qkgcl"] Feb 24 16:04:00 crc kubenswrapper[4982]: E0224 16:04:00.186515 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f684b339-eaff-433e-8f63-764e171dd50c" containerName="registry-server" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.187805 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f684b339-eaff-433e-8f63-764e171dd50c" containerName="registry-server" Feb 24 16:04:00 crc kubenswrapper[4982]: E0224 16:04:00.187937 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54424d9-e6c7-41a2-be2a-31be457bffd7" containerName="oc" Feb 24 16:04:00 crc 
kubenswrapper[4982]: I0224 16:04:00.188004 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54424d9-e6c7-41a2-be2a-31be457bffd7" containerName="oc" Feb 24 16:04:00 crc kubenswrapper[4982]: E0224 16:04:00.188075 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f684b339-eaff-433e-8f63-764e171dd50c" containerName="extract-content" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.188140 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f684b339-eaff-433e-8f63-764e171dd50c" containerName="extract-content" Feb 24 16:04:00 crc kubenswrapper[4982]: E0224 16:04:00.188246 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f684b339-eaff-433e-8f63-764e171dd50c" containerName="extract-utilities" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.189420 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f684b339-eaff-433e-8f63-764e171dd50c" containerName="extract-utilities" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.190119 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54424d9-e6c7-41a2-be2a-31be457bffd7" containerName="oc" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.190274 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f684b339-eaff-433e-8f63-764e171dd50c" containerName="registry-server" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.192902 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532484-qkgcl" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.199039 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.199330 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.199799 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.205411 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532484-qkgcl"] Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.309033 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tws\" (UniqueName: \"kubernetes.io/projected/395e1054-ebf9-4f8d-8937-17ac137f9e30-kube-api-access-27tws\") pod \"auto-csr-approver-29532484-qkgcl\" (UID: \"395e1054-ebf9-4f8d-8937-17ac137f9e30\") " pod="openshift-infra/auto-csr-approver-29532484-qkgcl" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.412093 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tws\" (UniqueName: \"kubernetes.io/projected/395e1054-ebf9-4f8d-8937-17ac137f9e30-kube-api-access-27tws\") pod \"auto-csr-approver-29532484-qkgcl\" (UID: \"395e1054-ebf9-4f8d-8937-17ac137f9e30\") " pod="openshift-infra/auto-csr-approver-29532484-qkgcl" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.443484 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tws\" (UniqueName: \"kubernetes.io/projected/395e1054-ebf9-4f8d-8937-17ac137f9e30-kube-api-access-27tws\") pod \"auto-csr-approver-29532484-qkgcl\" (UID: \"395e1054-ebf9-4f8d-8937-17ac137f9e30\") " pod="openshift-infra/auto-csr-approver-29532484-qkgcl" Feb 24 16:04:00 crc kubenswrapper[4982]: 
I0224 16:04:00.517911 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532484-qkgcl" Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.979485 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532484-qkgcl"] Feb 24 16:04:00 crc kubenswrapper[4982]: I0224 16:04:00.997079 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532484-qkgcl" event={"ID":"395e1054-ebf9-4f8d-8937-17ac137f9e30","Type":"ContainerStarted","Data":"c0e7ba72f936c4197e2e40d922659dcda6440935f07d33c0eec472d700424eb3"} Feb 24 16:04:03 crc kubenswrapper[4982]: I0224 16:04:03.019819 4982 generic.go:334] "Generic (PLEG): container finished" podID="395e1054-ebf9-4f8d-8937-17ac137f9e30" containerID="e0bdda07973f5d32e4d218e4b6590e3ae2ebdd6f8ecc6e838e2a5c140bddfd07" exitCode=0 Feb 24 16:04:03 crc kubenswrapper[4982]: I0224 16:04:03.019895 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532484-qkgcl" event={"ID":"395e1054-ebf9-4f8d-8937-17ac137f9e30","Type":"ContainerDied","Data":"e0bdda07973f5d32e4d218e4b6590e3ae2ebdd6f8ecc6e838e2a5c140bddfd07"} Feb 24 16:04:04 crc kubenswrapper[4982]: I0224 16:04:04.516313 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532484-qkgcl" Feb 24 16:04:04 crc kubenswrapper[4982]: I0224 16:04:04.719249 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27tws\" (UniqueName: \"kubernetes.io/projected/395e1054-ebf9-4f8d-8937-17ac137f9e30-kube-api-access-27tws\") pod \"395e1054-ebf9-4f8d-8937-17ac137f9e30\" (UID: \"395e1054-ebf9-4f8d-8937-17ac137f9e30\") " Feb 24 16:04:04 crc kubenswrapper[4982]: I0224 16:04:04.728174 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395e1054-ebf9-4f8d-8937-17ac137f9e30-kube-api-access-27tws" (OuterVolumeSpecName: "kube-api-access-27tws") pod "395e1054-ebf9-4f8d-8937-17ac137f9e30" (UID: "395e1054-ebf9-4f8d-8937-17ac137f9e30"). InnerVolumeSpecName "kube-api-access-27tws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:04:04 crc kubenswrapper[4982]: I0224 16:04:04.823392 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27tws\" (UniqueName: \"kubernetes.io/projected/395e1054-ebf9-4f8d-8937-17ac137f9e30-kube-api-access-27tws\") on node \"crc\" DevicePath \"\"" Feb 24 16:04:05 crc kubenswrapper[4982]: I0224 16:04:05.047541 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532484-qkgcl" event={"ID":"395e1054-ebf9-4f8d-8937-17ac137f9e30","Type":"ContainerDied","Data":"c0e7ba72f936c4197e2e40d922659dcda6440935f07d33c0eec472d700424eb3"} Feb 24 16:04:05 crc kubenswrapper[4982]: I0224 16:04:05.047575 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e7ba72f936c4197e2e40d922659dcda6440935f07d33c0eec472d700424eb3" Feb 24 16:04:05 crc kubenswrapper[4982]: I0224 16:04:05.047639 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532484-qkgcl" Feb 24 16:04:05 crc kubenswrapper[4982]: I0224 16:04:05.612092 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532478-75s6j"] Feb 24 16:04:05 crc kubenswrapper[4982]: I0224 16:04:05.625227 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532478-75s6j"] Feb 24 16:04:07 crc kubenswrapper[4982]: I0224 16:04:07.172482 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907d4519-d060-495b-8cba-a2cc7ea757ee" path="/var/lib/kubelet/pods/907d4519-d060-495b-8cba-a2cc7ea757ee/volumes" Feb 24 16:04:51 crc kubenswrapper[4982]: I0224 16:04:51.549908 4982 scope.go:117] "RemoveContainer" containerID="2e338ea674451a7c809e8d72fd0bd7529e42db4fa848689829934df973e26006" Feb 24 16:05:08 crc kubenswrapper[4982]: I0224 16:05:08.738049 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:05:08 crc kubenswrapper[4982]: I0224 16:05:08.739309 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:05:38 crc kubenswrapper[4982]: I0224 16:05:38.738879 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:05:38 crc kubenswrapper[4982]: I0224 16:05:38.739612 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.150157 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532486-88k27"] Feb 24 16:06:00 crc kubenswrapper[4982]: E0224 16:06:00.151176 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395e1054-ebf9-4f8d-8937-17ac137f9e30" containerName="oc" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.151188 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="395e1054-ebf9-4f8d-8937-17ac137f9e30" containerName="oc" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.151416 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="395e1054-ebf9-4f8d-8937-17ac137f9e30" containerName="oc" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.152444 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532486-88k27" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.156760 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.156917 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.157203 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.175537 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532486-88k27"] Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.232340 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z229l\" (UniqueName: \"kubernetes.io/projected/633787b6-7c71-4eb7-bcce-aaea34ddd03d-kube-api-access-z229l\") pod \"auto-csr-approver-29532486-88k27\" (UID: \"633787b6-7c71-4eb7-bcce-aaea34ddd03d\") " pod="openshift-infra/auto-csr-approver-29532486-88k27" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.334880 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z229l\" (UniqueName: \"kubernetes.io/projected/633787b6-7c71-4eb7-bcce-aaea34ddd03d-kube-api-access-z229l\") pod \"auto-csr-approver-29532486-88k27\" (UID: \"633787b6-7c71-4eb7-bcce-aaea34ddd03d\") " pod="openshift-infra/auto-csr-approver-29532486-88k27" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.355435 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z229l\" (UniqueName: \"kubernetes.io/projected/633787b6-7c71-4eb7-bcce-aaea34ddd03d-kube-api-access-z229l\") pod \"auto-csr-approver-29532486-88k27\" (UID: \"633787b6-7c71-4eb7-bcce-aaea34ddd03d\") " pod="openshift-infra/auto-csr-approver-29532486-88k27" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.478100 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532486-88k27" Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.972475 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532486-88k27"] Feb 24 16:06:00 crc kubenswrapper[4982]: W0224 16:06:00.976181 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod633787b6_7c71_4eb7_bcce_aaea34ddd03d.slice/crio-9208005ea02500b2903627ebef57ea6f0c796fea1152f668f4778bfe8bbd7f89 WatchSource:0}: Error finding container 9208005ea02500b2903627ebef57ea6f0c796fea1152f668f4778bfe8bbd7f89: Status 404 returned error can't find the container with id 9208005ea02500b2903627ebef57ea6f0c796fea1152f668f4778bfe8bbd7f89 Feb 24 16:06:00 crc kubenswrapper[4982]: I0224 16:06:00.979332 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 16:06:01 crc kubenswrapper[4982]: I0224 16:06:01.563814 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532486-88k27" event={"ID":"633787b6-7c71-4eb7-bcce-aaea34ddd03d","Type":"ContainerStarted","Data":"9208005ea02500b2903627ebef57ea6f0c796fea1152f668f4778bfe8bbd7f89"} Feb 24 16:06:02 crc kubenswrapper[4982]: I0224 16:06:02.577667 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532486-88k27" event={"ID":"633787b6-7c71-4eb7-bcce-aaea34ddd03d","Type":"ContainerStarted","Data":"2d5e049533ab266342b0aa73da90a775123cb47a6f629523ad0e964d55e47c71"} Feb 24 16:06:02 crc kubenswrapper[4982]: I0224 16:06:02.601895 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532486-88k27" podStartSLOduration=1.630261113 podStartE2EDuration="2.601872531s" podCreationTimestamp="2026-02-24 16:06:00 +0000 UTC" firstStartedPulling="2026-02-24 16:06:00.978986994 +0000 UTC m=+4622.598045507" lastFinishedPulling="2026-02-24 16:06:01.950598432 +0000 UTC m=+4623.569656925" observedRunningTime="2026-02-24 16:06:02.59597664 +0000 UTC m=+4624.215035143" watchObservedRunningTime="2026-02-24 16:06:02.601872531 +0000 UTC m=+4624.220931024" Feb 24 16:06:03 crc kubenswrapper[4982]: I0224 16:06:03.590646 4982 generic.go:334] "Generic (PLEG): container finished" podID="633787b6-7c71-4eb7-bcce-aaea34ddd03d" containerID="2d5e049533ab266342b0aa73da90a775123cb47a6f629523ad0e964d55e47c71" exitCode=0 Feb 24 16:06:03 crc kubenswrapper[4982]: I0224 16:06:03.590758 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532486-88k27" event={"ID":"633787b6-7c71-4eb7-bcce-aaea34ddd03d","Type":"ContainerDied","Data":"2d5e049533ab266342b0aa73da90a775123cb47a6f629523ad0e964d55e47c71"} Feb 24 16:06:05 crc kubenswrapper[4982]: I0224 16:06:05.191191 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532486-88k27" Feb 24 16:06:05 crc kubenswrapper[4982]: I0224 16:06:05.281804 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z229l\" (UniqueName: \"kubernetes.io/projected/633787b6-7c71-4eb7-bcce-aaea34ddd03d-kube-api-access-z229l\") pod \"633787b6-7c71-4eb7-bcce-aaea34ddd03d\" (UID: \"633787b6-7c71-4eb7-bcce-aaea34ddd03d\") " Feb 24 16:06:05 crc kubenswrapper[4982]: I0224 16:06:05.290846 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633787b6-7c71-4eb7-bcce-aaea34ddd03d-kube-api-access-z229l" (OuterVolumeSpecName: "kube-api-access-z229l") pod "633787b6-7c71-4eb7-bcce-aaea34ddd03d" (UID: "633787b6-7c71-4eb7-bcce-aaea34ddd03d"). InnerVolumeSpecName "kube-api-access-z229l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:06:05 crc kubenswrapper[4982]: I0224 16:06:05.383702 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z229l\" (UniqueName: \"kubernetes.io/projected/633787b6-7c71-4eb7-bcce-aaea34ddd03d-kube-api-access-z229l\") on node \"crc\" DevicePath \"\"" Feb 24 16:06:05 crc kubenswrapper[4982]: I0224 16:06:05.617475 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532486-88k27" event={"ID":"633787b6-7c71-4eb7-bcce-aaea34ddd03d","Type":"ContainerDied","Data":"9208005ea02500b2903627ebef57ea6f0c796fea1152f668f4778bfe8bbd7f89"} Feb 24 16:06:05 crc kubenswrapper[4982]: I0224 16:06:05.617816 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9208005ea02500b2903627ebef57ea6f0c796fea1152f668f4778bfe8bbd7f89" Feb 24 16:06:05 crc kubenswrapper[4982]: I0224 16:06:05.617551 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532486-88k27" Feb 24 16:06:05 crc kubenswrapper[4982]: I0224 16:06:05.679858 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532480-5h625"] Feb 24 16:06:05 crc kubenswrapper[4982]: I0224 16:06:05.694786 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532480-5h625"] Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.157835 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a8be4e-9329-41e5-bcc7-50e8dbc8026f" path="/var/lib/kubelet/pods/19a8be4e-9329-41e5-bcc7-50e8dbc8026f/volumes" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.565957 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gzls6"] Feb 24 16:06:07 crc kubenswrapper[4982]: E0224 16:06:07.566674 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633787b6-7c71-4eb7-bcce-aaea34ddd03d" containerName="oc" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.566700 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="633787b6-7c71-4eb7-bcce-aaea34ddd03d" containerName="oc" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.567013 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="633787b6-7c71-4eb7-bcce-aaea34ddd03d" containerName="oc" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.569253 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.581392 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzls6"] Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.633477 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-catalog-content\") pod \"redhat-operators-gzls6\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.634448 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-utilities\") pod \"redhat-operators-gzls6\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.635452 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cpgg\" (UniqueName: \"kubernetes.io/projected/6b1229f1-d8bf-4403-8cc3-b06654574fa7-kube-api-access-7cpgg\") pod \"redhat-operators-gzls6\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.738677 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cpgg\" (UniqueName: \"kubernetes.io/projected/6b1229f1-d8bf-4403-8cc3-b06654574fa7-kube-api-access-7cpgg\") pod \"redhat-operators-gzls6\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.739458 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-catalog-content\") pod \"redhat-operators-gzls6\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.740041 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-catalog-content\") pod \"redhat-operators-gzls6\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.740285 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-utilities\") pod \"redhat-operators-gzls6\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.740606 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-utilities\") pod \"redhat-operators-gzls6\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.756428 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7cpgg\" (UniqueName: \"kubernetes.io/projected/6b1229f1-d8bf-4403-8cc3-b06654574fa7-kube-api-access-7cpgg\") pod \"redhat-operators-gzls6\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:07 crc kubenswrapper[4982]: I0224 16:06:07.906921 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:08 crc kubenswrapper[4982]: I0224 16:06:08.407762 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzls6"] Feb 24 16:06:08 crc kubenswrapper[4982]: I0224 16:06:08.650166 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzls6" event={"ID":"6b1229f1-d8bf-4403-8cc3-b06654574fa7","Type":"ContainerStarted","Data":"6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9"} Feb 24 16:06:08 crc kubenswrapper[4982]: I0224 16:06:08.650536 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzls6" event={"ID":"6b1229f1-d8bf-4403-8cc3-b06654574fa7","Type":"ContainerStarted","Data":"aadf9215a60a511b54bdb302ff29e04c48d6bd5df4dceef6f6d132c44eb720fc"} Feb 24 16:06:08 crc kubenswrapper[4982]: I0224 16:06:08.738176 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:06:08 crc kubenswrapper[4982]: I0224 16:06:08.738235 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:06:08 crc kubenswrapper[4982]: I0224 16:06:08.738277 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 16:06:08 crc kubenswrapper[4982]: I0224 16:06:08.739119 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 16:06:08 crc kubenswrapper[4982]: I0224 16:06:08.739180 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" gracePeriod=600 Feb 24 16:06:09 crc kubenswrapper[4982]: E0224 16:06:09.315457 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:06:09 crc 
kubenswrapper[4982]: I0224 16:06:09.671231 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" exitCode=0 Feb 24 16:06:09 crc kubenswrapper[4982]: I0224 16:06:09.671311 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"} Feb 24 16:06:09 crc kubenswrapper[4982]: I0224 16:06:09.671375 4982 scope.go:117] "RemoveContainer" containerID="66de4f83f8b681be75748135f58476551b9f14aee0880491d293552e745c9035" Feb 24 16:06:09 crc kubenswrapper[4982]: I0224 16:06:09.672251 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:06:09 crc kubenswrapper[4982]: E0224 16:06:09.672601 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:06:09 crc kubenswrapper[4982]: I0224 16:06:09.678082 4982 generic.go:334] "Generic (PLEG): container finished" podID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerID="6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9" exitCode=0 Feb 24 16:06:09 crc kubenswrapper[4982]: I0224 16:06:09.678130 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzls6" event={"ID":"6b1229f1-d8bf-4403-8cc3-b06654574fa7","Type":"ContainerDied","Data":"6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9"} Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.765036 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.767358 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.771839 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.772036 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.772199 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.774353 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fh5tb" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.784402 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.924279 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.925178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.925397 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-config-data\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.925638 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.925973 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.926157 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.926345 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbgh8\" (UniqueName: 
\"kubernetes.io/projected/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-kube-api-access-zbgh8\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.926597 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:10 crc kubenswrapper[4982]: I0224 16:06:10.926825 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.029661 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.029738 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.029895 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.029976 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.030003 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-config-data\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.030036 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.030144 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.030175 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.030195 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbgh8\" (UniqueName: \"kubernetes.io/projected/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-kube-api-access-zbgh8\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.031132 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.031248 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.031863 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.032496 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-config-data\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.032586 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.040388 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.041489 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc 
kubenswrapper[4982]: I0224 16:06:11.041865 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.058586 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbgh8\" (UniqueName: \"kubernetes.io/projected/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-kube-api-access-zbgh8\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.085285 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.387442 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.702838 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzls6" event={"ID":"6b1229f1-d8bf-4403-8cc3-b06654574fa7","Type":"ContainerStarted","Data":"d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f"} Feb 24 16:06:11 crc kubenswrapper[4982]: I0224 16:06:11.885164 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 24 16:06:11 crc kubenswrapper[4982]: W0224 16:06:11.886969 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f07bb24_e52f_4fcc_b7f5_92a4a5e9b3d9.slice/crio-8fd1b058558cce835df51c3f72fc22c11ec4a66cb38c9aa85cd1282912b0b78c WatchSource:0}: Error finding container 8fd1b058558cce835df51c3f72fc22c11ec4a66cb38c9aa85cd1282912b0b78c: Status 404 returned error can't find the container with id 8fd1b058558cce835df51c3f72fc22c11ec4a66cb38c9aa85cd1282912b0b78c Feb 24 16:06:12 crc kubenswrapper[4982]: I0224 16:06:12.714825 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9","Type":"ContainerStarted","Data":"8fd1b058558cce835df51c3f72fc22c11ec4a66cb38c9aa85cd1282912b0b78c"} Feb 24 16:06:20 crc kubenswrapper[4982]: I0224 16:06:20.849476 4982 generic.go:334] "Generic (PLEG): container finished" podID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerID="d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f" exitCode=0 Feb 24 16:06:20 crc kubenswrapper[4982]: I0224 16:06:20.849544 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzls6" event={"ID":"6b1229f1-d8bf-4403-8cc3-b06654574fa7","Type":"ContainerDied","Data":"d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f"} Feb 24 16:06:21 crc kubenswrapper[4982]: I0224 16:06:21.148620 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:06:21 crc kubenswrapper[4982]: E0224 16:06:21.153716 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:06:23 crc kubenswrapper[4982]: I0224 16:06:23.889265 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzls6" event={"ID":"6b1229f1-d8bf-4403-8cc3-b06654574fa7","Type":"ContainerStarted","Data":"7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b"} Feb 24 16:06:23 crc kubenswrapper[4982]: I0224 16:06:23.916790 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gzls6" podStartSLOduration=3.789634863 podStartE2EDuration="16.916769545s" podCreationTimestamp="2026-02-24 16:06:07 +0000 UTC" firstStartedPulling="2026-02-24 16:06:09.680881274 +0000 UTC m=+4631.299939777" lastFinishedPulling="2026-02-24 16:06:22.808015946 +0000 UTC m=+4644.427074459" observedRunningTime="2026-02-24 16:06:23.905623501 +0000 UTC m=+4645.524681994" watchObservedRunningTime="2026-02-24 16:06:23.916769545 +0000 UTC m=+4645.535828038" Feb 24 16:06:27 crc kubenswrapper[4982]: I0224 16:06:27.907240 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:27 crc kubenswrapper[4982]: I0224 16:06:27.907857 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:06:28 crc kubenswrapper[4982]: I0224 16:06:28.972926 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzls6" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="registry-server" probeResult="failure" output=< Feb 24 16:06:28 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:06:28 crc kubenswrapper[4982]: > Feb 24 16:06:29 crc kubenswrapper[4982]: I0224 16:06:29.823970 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fpvpt"] Feb 24 16:06:29 crc kubenswrapper[4982]: I0224 16:06:29.827432 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:29 crc kubenswrapper[4982]: I0224 16:06:29.835467 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpvpt"] Feb 24 16:06:29 crc kubenswrapper[4982]: I0224 16:06:29.940487 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-catalog-content\") pod \"redhat-marketplace-fpvpt\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:29 crc kubenswrapper[4982]: I0224 16:06:29.940573 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-utilities\") pod \"redhat-marketplace-fpvpt\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:29 crc kubenswrapper[4982]: I0224 16:06:29.940618 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46zl\" (UniqueName: \"kubernetes.io/projected/3bd77963-3239-4cc2-b85e-0c36a84b9a71-kube-api-access-s46zl\") pod \"redhat-marketplace-fpvpt\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:30 crc kubenswrapper[4982]: I0224 16:06:30.047536 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-utilities\") pod \"redhat-marketplace-fpvpt\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:30 crc kubenswrapper[4982]: I0224 16:06:30.047704 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s46zl\" (UniqueName: \"kubernetes.io/projected/3bd77963-3239-4cc2-b85e-0c36a84b9a71-kube-api-access-s46zl\") pod \"redhat-marketplace-fpvpt\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:30 crc kubenswrapper[4982]: I0224 16:06:30.048155 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-utilities\") pod \"redhat-marketplace-fpvpt\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:30 crc kubenswrapper[4982]: I0224 16:06:30.048262 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-catalog-content\") pod \"redhat-marketplace-fpvpt\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:30 crc kubenswrapper[4982]: I0224 16:06:30.060536 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-catalog-content\") pod \"redhat-marketplace-fpvpt\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:30 crc kubenswrapper[4982]: I0224 16:06:30.097402 4982 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s46zl\" (UniqueName: \"kubernetes.io/projected/3bd77963-3239-4cc2-b85e-0c36a84b9a71-kube-api-access-s46zl\") pod \"redhat-marketplace-fpvpt\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:30 crc kubenswrapper[4982]: I0224 16:06:30.163863 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:06:33 crc kubenswrapper[4982]: I0224 16:06:33.147476 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:06:33 crc kubenswrapper[4982]: E0224 16:06:33.148587 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:06:38 crc kubenswrapper[4982]: I0224 16:06:38.960208 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzls6" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="registry-server" probeResult="failure" output=< Feb 24 16:06:38 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:06:38 crc kubenswrapper[4982]: > Feb 24 16:06:47 crc kubenswrapper[4982]: I0224 16:06:47.146150 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:06:47 crc kubenswrapper[4982]: E0224 16:06:47.146989 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:06:48 crc kubenswrapper[4982]: E0224 16:06:48.569522 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 24 16:06:48 crc kubenswrapper[4982]: E0224 16:06:48.572572 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbgh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 16:06:48 crc kubenswrapper[4982]: E0224 16:06:48.574422 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" Feb 24 16:06:48 crc kubenswrapper[4982]: I0224 16:06:48.992184 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzls6" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="registry-server" probeResult="failure" output=< Feb 24 16:06:48 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:06:48 crc kubenswrapper[4982]: > Feb 24 16:06:49 crc kubenswrapper[4982]: E0224 16:06:49.246145 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" Feb 24 16:06:49 crc kubenswrapper[4982]: I0224 16:06:49.327621 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpvpt"] Feb 24 16:06:49 crc kubenswrapper[4982]: W0224 16:06:49.332246 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bd77963_3239_4cc2_b85e_0c36a84b9a71.slice/crio-daeacfc3178298c1156bd16ede0b96144c643d5eb694b4ac6814fc4b6280c047 WatchSource:0}: Error finding container daeacfc3178298c1156bd16ede0b96144c643d5eb694b4ac6814fc4b6280c047: Status 404 returned error can't find the container with id daeacfc3178298c1156bd16ede0b96144c643d5eb694b4ac6814fc4b6280c047 Feb 24 16:06:50 crc kubenswrapper[4982]: I0224 16:06:50.257628 4982 generic.go:334] "Generic (PLEG): container finished" podID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerID="8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964" exitCode=0 Feb 24 16:06:50 crc kubenswrapper[4982]: I0224 16:06:50.257781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpvpt" event={"ID":"3bd77963-3239-4cc2-b85e-0c36a84b9a71","Type":"ContainerDied","Data":"8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964"} Feb 24 16:06:50 crc kubenswrapper[4982]: I0224 16:06:50.258074 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpvpt" event={"ID":"3bd77963-3239-4cc2-b85e-0c36a84b9a71","Type":"ContainerStarted","Data":"daeacfc3178298c1156bd16ede0b96144c643d5eb694b4ac6814fc4b6280c047"} Feb 24 16:06:51 crc kubenswrapper[4982]: I0224 16:06:51.279823 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpvpt" event={"ID":"3bd77963-3239-4cc2-b85e-0c36a84b9a71","Type":"ContainerStarted","Data":"27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28"} Feb 24 16:06:51 crc kubenswrapper[4982]: I0224 16:06:51.652204 4982 scope.go:117] "RemoveContainer" containerID="019dd81c02c07af4791fbc0105d132e235dd02cb2add7f4f093c16ad09286577" Feb 24 16:06:53 crc kubenswrapper[4982]: I0224 16:06:53.313350 4982 generic.go:334] "Generic (PLEG): container finished" podID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerID="27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28" exitCode=0 Feb 24 16:06:53 crc kubenswrapper[4982]: I0224 16:06:53.313438 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpvpt" 
event={"ID":"3bd77963-3239-4cc2-b85e-0c36a84b9a71","Type":"ContainerDied","Data":"27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28"} Feb 24 16:06:54 crc kubenswrapper[4982]: I0224 16:06:54.334827 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpvpt" event={"ID":"3bd77963-3239-4cc2-b85e-0c36a84b9a71","Type":"ContainerStarted","Data":"ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878"} Feb 24 16:06:54 crc kubenswrapper[4982]: I0224 16:06:54.384167 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fpvpt" podStartSLOduration=21.83343698 podStartE2EDuration="25.384139082s" podCreationTimestamp="2026-02-24 16:06:29 +0000 UTC" firstStartedPulling="2026-02-24 16:06:50.259958494 +0000 UTC m=+4671.879016997" lastFinishedPulling="2026-02-24 16:06:53.810660576 +0000 UTC m=+4675.429719099" observedRunningTime="2026-02-24 16:06:54.369547404 +0000 UTC m=+4675.988605937" watchObservedRunningTime="2026-02-24 16:06:54.384139082 +0000 UTC m=+4676.003197615" Feb 24 16:06:59 crc kubenswrapper[4982]: I0224 16:06:59.338645 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzls6" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="registry-server" probeResult="failure" output=< Feb 24 16:06:59 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:06:59 crc kubenswrapper[4982]: > Feb 24 16:07:00 crc kubenswrapper[4982]: I0224 16:07:00.145868 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:07:00 crc kubenswrapper[4982]: E0224 16:07:00.146301 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:07:00 crc kubenswrapper[4982]: I0224 16:07:00.164943 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:07:00 crc kubenswrapper[4982]: I0224 16:07:00.165451 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:07:00 crc kubenswrapper[4982]: I0224 16:07:00.229130 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:07:00 crc kubenswrapper[4982]: I0224 16:07:00.580062 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 24 16:07:01 crc kubenswrapper[4982]: I0224 16:07:01.570222 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:07:01 crc kubenswrapper[4982]: I0224 16:07:01.642534 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpvpt"] Feb 24 16:07:03 crc kubenswrapper[4982]: I0224 16:07:03.482278 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9","Type":"ContainerStarted","Data":"03214d5971da6d75c5c45446b2af41004669e5c071695658a55ccdac72250a97"} Feb 24 16:07:03 crc kubenswrapper[4982]: I0224 16:07:03.482426 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fpvpt" podUID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerName="registry-server" containerID="cri-o://ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878" gracePeriod=2 Feb 24 16:07:03 crc kubenswrapper[4982]: I0224 16:07:03.524706 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.847509283 podStartE2EDuration="54.52467721s" podCreationTimestamp="2026-02-24 16:06:09 +0000 UTC" firstStartedPulling="2026-02-24 16:06:11.897312644 +0000 UTC m=+4633.516371127" lastFinishedPulling="2026-02-24 16:07:00.574480521 +0000 UTC m=+4682.193539054" observedRunningTime="2026-02-24 16:07:03.504736816 +0000 UTC m=+4685.123795329" watchObservedRunningTime="2026-02-24 16:07:03.52467721 +0000 UTC m=+4685.143735743" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.020281 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.166877 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-catalog-content\") pod \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.167192 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-utilities\") pod \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.167262 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s46zl\" (UniqueName: \"kubernetes.io/projected/3bd77963-3239-4cc2-b85e-0c36a84b9a71-kube-api-access-s46zl\") pod \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\" (UID: \"3bd77963-3239-4cc2-b85e-0c36a84b9a71\") " Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.168146 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-utilities" (OuterVolumeSpecName: "utilities") pod "3bd77963-3239-4cc2-b85e-0c36a84b9a71" (UID: "3bd77963-3239-4cc2-b85e-0c36a84b9a71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.188857 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd77963-3239-4cc2-b85e-0c36a84b9a71-kube-api-access-s46zl" (OuterVolumeSpecName: "kube-api-access-s46zl") pod "3bd77963-3239-4cc2-b85e-0c36a84b9a71" (UID: "3bd77963-3239-4cc2-b85e-0c36a84b9a71"). InnerVolumeSpecName "kube-api-access-s46zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.191955 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bd77963-3239-4cc2-b85e-0c36a84b9a71" (UID: "3bd77963-3239-4cc2-b85e-0c36a84b9a71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.271858 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.271896 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s46zl\" (UniqueName: \"kubernetes.io/projected/3bd77963-3239-4cc2-b85e-0c36a84b9a71-kube-api-access-s46zl\") on node \"crc\" DevicePath \"\"" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.271910 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd77963-3239-4cc2-b85e-0c36a84b9a71-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.497019 4982 generic.go:334] "Generic (PLEG): container finished" podID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerID="ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878" exitCode=0 Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.497061 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpvpt" event={"ID":"3bd77963-3239-4cc2-b85e-0c36a84b9a71","Type":"ContainerDied","Data":"ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878"} Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.497087 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fpvpt" event={"ID":"3bd77963-3239-4cc2-b85e-0c36a84b9a71","Type":"ContainerDied","Data":"daeacfc3178298c1156bd16ede0b96144c643d5eb694b4ac6814fc4b6280c047"} Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.497093 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fpvpt" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.497110 4982 scope.go:117] "RemoveContainer" containerID="ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.524661 4982 scope.go:117] "RemoveContainer" containerID="27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.538974 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpvpt"] Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.555860 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fpvpt"] Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.557097 4982 scope.go:117] "RemoveContainer" containerID="8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.610376 4982 scope.go:117] "RemoveContainer" containerID="ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878" Feb 24 16:07:04 crc kubenswrapper[4982]: E0224 16:07:04.610796 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878\": container with ID starting with ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878 not found: ID does not exist" containerID="ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.610841 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878"} err="failed to get container status \"ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878\": rpc error: code = NotFound desc = could not find container \"ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878\": container with ID starting with ceb023c54e038a1dfbb34c553b23f34dd3dd990bd4fec0369de5d91015c40878 not found: ID does not exist" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.610862 4982 scope.go:117] "RemoveContainer" containerID="27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28" Feb 24 16:07:04 crc kubenswrapper[4982]: E0224 16:07:04.611415 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28\": container with ID starting with 27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28 not found: ID does not exist" containerID="27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.611449 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28"} err="failed to get container status \"27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28\": rpc error: code = NotFound desc = could not find container \"27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28\": container with ID starting with 27417bb56b07a6308f1e3d15b86a10edd403ca07e4f332150b29b8a336ecfc28 not found: ID does not exist" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.611469 4982 scope.go:117] "RemoveContainer" 
containerID="8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964" Feb 24 16:07:04 crc kubenswrapper[4982]: E0224 16:07:04.611814 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964\": container with ID starting with 8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964 not found: ID does not exist" containerID="8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964" Feb 24 16:07:04 crc kubenswrapper[4982]: I0224 16:07:04.611870 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964"} err="failed to get container status \"8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964\": rpc error: code = NotFound desc = could not find container \"8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964\": container with ID starting with 8359d8d51292baa9b963a6c556131f721c2df33ab835c41176be3c9d87ebe964 not found: ID does not exist" Feb 24 16:07:05 crc kubenswrapper[4982]: I0224 16:07:05.160240 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" path="/var/lib/kubelet/pods/3bd77963-3239-4cc2-b85e-0c36a84b9a71/volumes" Feb 24 16:07:08 crc kubenswrapper[4982]: I0224 16:07:08.982595 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzls6" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="registry-server" probeResult="failure" output=< Feb 24 16:07:08 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:07:08 crc kubenswrapper[4982]: > Feb 24 16:07:13 crc kubenswrapper[4982]: I0224 16:07:13.146799 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:07:13 crc kubenswrapper[4982]: E0224 16:07:13.148238 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:07:17 crc kubenswrapper[4982]: I0224 16:07:17.968464 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:07:18 crc kubenswrapper[4982]: I0224 16:07:18.034872 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:07:18 crc kubenswrapper[4982]: I0224 16:07:18.219433 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzls6"] Feb 24 16:07:19 crc kubenswrapper[4982]: I0224 16:07:19.690911 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gzls6" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="registry-server" containerID="cri-o://7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b" gracePeriod=2 Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.246317 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.416180 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-utilities\") pod \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.416527 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cpgg\" (UniqueName: \"kubernetes.io/projected/6b1229f1-d8bf-4403-8cc3-b06654574fa7-kube-api-access-7cpgg\") pod \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.416590 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-catalog-content\") pod \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\" (UID: \"6b1229f1-d8bf-4403-8cc3-b06654574fa7\") " Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.418651 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-utilities" (OuterVolumeSpecName: "utilities") pod "6b1229f1-d8bf-4403-8cc3-b06654574fa7" (UID: "6b1229f1-d8bf-4403-8cc3-b06654574fa7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.427651 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b1229f1-d8bf-4403-8cc3-b06654574fa7-kube-api-access-7cpgg" (OuterVolumeSpecName: "kube-api-access-7cpgg") pod "6b1229f1-d8bf-4403-8cc3-b06654574fa7" (UID: "6b1229f1-d8bf-4403-8cc3-b06654574fa7"). InnerVolumeSpecName "kube-api-access-7cpgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.520086 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cpgg\" (UniqueName: \"kubernetes.io/projected/6b1229f1-d8bf-4403-8cc3-b06654574fa7-kube-api-access-7cpgg\") on node \"crc\" DevicePath \"\"" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.520131 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.539829 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b1229f1-d8bf-4403-8cc3-b06654574fa7" (UID: "6b1229f1-d8bf-4403-8cc3-b06654574fa7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.622973 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1229f1-d8bf-4403-8cc3-b06654574fa7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.711205 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzls6" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.711226 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzls6" event={"ID":"6b1229f1-d8bf-4403-8cc3-b06654574fa7","Type":"ContainerDied","Data":"7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b"} Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.711384 4982 scope.go:117] "RemoveContainer" containerID="7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.726439 4982 generic.go:334] "Generic (PLEG): container finished" podID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerID="7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b" exitCode=0 Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.726591 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzls6" event={"ID":"6b1229f1-d8bf-4403-8cc3-b06654574fa7","Type":"ContainerDied","Data":"aadf9215a60a511b54bdb302ff29e04c48d6bd5df4dceef6f6d132c44eb720fc"} Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.758299 4982 scope.go:117] "RemoveContainer" containerID="d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.801741 4982 scope.go:117] "RemoveContainer" containerID="6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.806106 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzls6"] Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.819660 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gzls6"] Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.881382 4982 scope.go:117] "RemoveContainer" containerID="7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b" Feb 24 16:07:20 crc kubenswrapper[4982]: E0224 16:07:20.881832 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b\": container with ID starting with 7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b not found: ID does not exist" containerID="7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.881879 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b"} err="failed to get container status \"7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b\": rpc error: code = NotFound desc = could not find container \"7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b\": container with ID starting with 7277cc53330eede30eb11824347de44f3bbad869207ab4629e4993b76e96c39b not found: ID does not exist" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.881907 4982 scope.go:117] "RemoveContainer" containerID="d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f" Feb 24 16:07:20 crc kubenswrapper[4982]: E0224 16:07:20.882320 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f\": container with ID starting with 
d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f not found: ID does not exist" containerID="d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.882351 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f"} err="failed to get container status \"d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f\": rpc error: code = NotFound desc = could not find container \"d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f\": container with ID starting with d9e30f516dbc410786c16ac80b5806d5c895ed59aafb4e9581ac9df85416f58f not found: ID does not exist" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.882374 4982 scope.go:117] "RemoveContainer" containerID="6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9" Feb 24 16:07:20 crc kubenswrapper[4982]: E0224 16:07:20.882871 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9\": container with ID starting with 6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9 not found: ID does not exist" containerID="6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9" Feb 24 16:07:20 crc kubenswrapper[4982]: I0224 16:07:20.882892 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9"} err="failed to get container status \"6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9\": rpc error: code = NotFound desc = could not find container \"6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9\": container with ID starting with 6ee861fc2acc0cc6efaa1d3a78f503dbf1ba07fe12fe2f5acac6cfe4491099d9 not found: ID does not exist" Feb 24 16:07:21 crc kubenswrapper[4982]: I0224 16:07:21.169468 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" path="/var/lib/kubelet/pods/6b1229f1-d8bf-4403-8cc3-b06654574fa7/volumes" Feb 24 16:07:26 crc kubenswrapper[4982]: I0224 16:07:26.147186 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:07:26 crc kubenswrapper[4982]: E0224 16:07:26.148286 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:07:40 crc kubenswrapper[4982]: I0224 16:07:40.148258 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:07:40 crc kubenswrapper[4982]: E0224 16:07:40.149019 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" 
podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:07:55 crc kubenswrapper[4982]: I0224 16:07:55.146041 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:07:55 crc kubenswrapper[4982]: E0224 16:07:55.148855 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.248138 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532488-8hkmb"] Feb 24 16:08:00 crc kubenswrapper[4982]: E0224 16:08:00.250987 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="extract-content" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.251014 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="extract-content" Feb 24 16:08:00 crc kubenswrapper[4982]: E0224 16:08:00.251038 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerName="registry-server" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.251047 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerName="registry-server" Feb 24 16:08:00 crc kubenswrapper[4982]: E0224 16:08:00.251067 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerName="extract-utilities" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.251075 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerName="extract-utilities" Feb 24 16:08:00 crc kubenswrapper[4982]: E0224 16:08:00.251150 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="registry-server" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.251159 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="registry-server" Feb 24 16:08:00 crc kubenswrapper[4982]: E0224 16:08:00.251189 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerName="extract-content" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.251200 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerName="extract-content" Feb 24 16:08:00 crc kubenswrapper[4982]: E0224 16:08:00.251223 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="extract-utilities" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.251240 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="extract-utilities" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.252521 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd77963-3239-4cc2-b85e-0c36a84b9a71" containerName="registry-server" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.252562 4982 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1229f1-d8bf-4403-8cc3-b06654574fa7" containerName="registry-server" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.255318 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532488-8hkmb" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.264054 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.264273 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.266291 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.318556 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532488-8hkmb"] Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.339880 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q666g\" (UniqueName: \"kubernetes.io/projected/03c3346c-b274-4585-aef3-8b39552f671a-kube-api-access-q666g\") pod \"auto-csr-approver-29532488-8hkmb\" (UID: \"03c3346c-b274-4585-aef3-8b39552f671a\") " pod="openshift-infra/auto-csr-approver-29532488-8hkmb" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.443126 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q666g\" (UniqueName: \"kubernetes.io/projected/03c3346c-b274-4585-aef3-8b39552f671a-kube-api-access-q666g\") pod \"auto-csr-approver-29532488-8hkmb\" (UID: \"03c3346c-b274-4585-aef3-8b39552f671a\") " pod="openshift-infra/auto-csr-approver-29532488-8hkmb" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.474027 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q666g\" (UniqueName: \"kubernetes.io/projected/03c3346c-b274-4585-aef3-8b39552f671a-kube-api-access-q666g\") pod \"auto-csr-approver-29532488-8hkmb\" (UID: \"03c3346c-b274-4585-aef3-8b39552f671a\") " pod="openshift-infra/auto-csr-approver-29532488-8hkmb" Feb 24 16:08:00 crc kubenswrapper[4982]: I0224 16:08:00.599465 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532488-8hkmb" Feb 24 16:08:01 crc kubenswrapper[4982]: I0224 16:08:01.669218 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532488-8hkmb"] Feb 24 16:08:01 crc kubenswrapper[4982]: W0224 16:08:01.729365 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03c3346c_b274_4585_aef3_8b39552f671a.slice/crio-13265297cddd40095c5701e96abc9a3c6cb225feb45696dc8dc1f215d7505118 WatchSource:0}: Error finding container 13265297cddd40095c5701e96abc9a3c6cb225feb45696dc8dc1f215d7505118: Status 404 returned error can't find the container with id 13265297cddd40095c5701e96abc9a3c6cb225feb45696dc8dc1f215d7505118 Feb 24 16:08:02 crc kubenswrapper[4982]: I0224 16:08:02.210188 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532488-8hkmb" event={"ID":"03c3346c-b274-4585-aef3-8b39552f671a","Type":"ContainerStarted","Data":"13265297cddd40095c5701e96abc9a3c6cb225feb45696dc8dc1f215d7505118"} Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.725911 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d44t5"] Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.730908 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.745413 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d44t5"] Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.872570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-catalog-content\") pod \"community-operators-d44t5\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") " pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.872856 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-utilities\") pod \"community-operators-d44t5\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") " pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.873174 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n464m\" (UniqueName: \"kubernetes.io/projected/46773d95-a48a-4b4d-b907-1739025cdc56-kube-api-access-n464m\") pod \"community-operators-d44t5\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") " pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.975692 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-catalog-content\") pod \"community-operators-d44t5\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") " pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.975767 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-utilities\") pod 
\"community-operators-d44t5\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") " pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.975879 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n464m\" (UniqueName: \"kubernetes.io/projected/46773d95-a48a-4b4d-b907-1739025cdc56-kube-api-access-n464m\") pod \"community-operators-d44t5\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") " pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.980524 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-catalog-content\") pod \"community-operators-d44t5\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") " pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:03 crc kubenswrapper[4982]: I0224 16:08:03.980695 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-utilities\") pod \"community-operators-d44t5\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") " pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:04 crc kubenswrapper[4982]: I0224 16:08:04.009014 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n464m\" (UniqueName: \"kubernetes.io/projected/46773d95-a48a-4b4d-b907-1739025cdc56-kube-api-access-n464m\") pod \"community-operators-d44t5\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") " pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:04 crc kubenswrapper[4982]: I0224 16:08:04.074427 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d44t5" Feb 24 16:08:04 crc kubenswrapper[4982]: I0224 16:08:04.229737 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532488-8hkmb" event={"ID":"03c3346c-b274-4585-aef3-8b39552f671a","Type":"ContainerStarted","Data":"e862712392a2f7f0dcf536be01ecdc62235688618cb011847dc24f2394260bee"} Feb 24 16:08:04 crc kubenswrapper[4982]: I0224 16:08:04.255012 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532488-8hkmb" podStartSLOduration=3.125957054 podStartE2EDuration="4.254161293s" podCreationTimestamp="2026-02-24 16:08:00 +0000 UTC" firstStartedPulling="2026-02-24 16:08:01.73666547 +0000 UTC m=+4743.355723973" lastFinishedPulling="2026-02-24 16:08:02.864869719 +0000 UTC m=+4744.483928212" observedRunningTime="2026-02-24 16:08:04.244423038 +0000 UTC m=+4745.863481531" watchObservedRunningTime="2026-02-24 16:08:04.254161293 +0000 UTC m=+4745.873219786" Feb 24 16:08:05 crc kubenswrapper[4982]: I0224 16:08:05.031317 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d44t5"] Feb 24 16:08:05 crc kubenswrapper[4982]: W0224 16:08:05.066419 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46773d95_a48a_4b4d_b907_1739025cdc56.slice/crio-1966100e56db2dd8e0799f297500622eda8f9ec55c88804e6f51f2bbbc3d9dee WatchSource:0}: Error finding container 1966100e56db2dd8e0799f297500622eda8f9ec55c88804e6f51f2bbbc3d9dee: Status 404 returned error can't find the container with id 1966100e56db2dd8e0799f297500622eda8f9ec55c88804e6f51f2bbbc3d9dee Feb 24 16:08:05 crc kubenswrapper[4982]: I0224 16:08:05.241042 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d44t5" event={"ID":"46773d95-a48a-4b4d-b907-1739025cdc56","Type":"ContainerStarted","Data":"1966100e56db2dd8e0799f297500622eda8f9ec55c88804e6f51f2bbbc3d9dee"} Feb 24 16:08:06 crc kubenswrapper[4982]: I0224 16:08:06.255184 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532488-8hkmb" event={"ID":"03c3346c-b274-4585-aef3-8b39552f671a","Type":"ContainerDied","Data":"e862712392a2f7f0dcf536be01ecdc62235688618cb011847dc24f2394260bee"} Feb 24 16:08:06 crc kubenswrapper[4982]: I0224 16:08:06.259442 4982 generic.go:334] "Generic (PLEG): container finished" podID="03c3346c-b274-4585-aef3-8b39552f671a" containerID="e862712392a2f7f0dcf536be01ecdc62235688618cb011847dc24f2394260bee" exitCode=0 Feb 24 16:08:06 crc kubenswrapper[4982]: I0224 16:08:06.262574 4982 generic.go:334] "Generic (PLEG): container finished" podID="46773d95-a48a-4b4d-b907-1739025cdc56" containerID="5ad6cf32ab16251e101910fbdd13feb0c59b08b6e9ce309185edc4e1d14351c3" exitCode=0 Feb 24 16:08:06 crc kubenswrapper[4982]: I0224 16:08:06.262659 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d44t5" event={"ID":"46773d95-a48a-4b4d-b907-1739025cdc56","Type":"ContainerDied","Data":"5ad6cf32ab16251e101910fbdd13feb0c59b08b6e9ce309185edc4e1d14351c3"} Feb 24 16:08:07 crc kubenswrapper[4982]: I0224 16:08:07.278958 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d44t5" event={"ID":"46773d95-a48a-4b4d-b907-1739025cdc56","Type":"ContainerStarted","Data":"91a1aab36f23ca586ffc78f0a7208fcaae91be9b5051d88d88b0163c4fc0e488"} 
Feb 24 16:08:07 crc kubenswrapper[4982]: I0224 16:08:07.893644 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532488-8hkmb"
Feb 24 16:08:07 crc kubenswrapper[4982]: I0224 16:08:07.993159 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q666g\" (UniqueName: \"kubernetes.io/projected/03c3346c-b274-4585-aef3-8b39552f671a-kube-api-access-q666g\") pod \"03c3346c-b274-4585-aef3-8b39552f671a\" (UID: \"03c3346c-b274-4585-aef3-8b39552f671a\") "
Feb 24 16:08:08 crc kubenswrapper[4982]: I0224 16:08:08.009648 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c3346c-b274-4585-aef3-8b39552f671a-kube-api-access-q666g" (OuterVolumeSpecName: "kube-api-access-q666g") pod "03c3346c-b274-4585-aef3-8b39552f671a" (UID: "03c3346c-b274-4585-aef3-8b39552f671a"). InnerVolumeSpecName "kube-api-access-q666g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:08:08 crc kubenswrapper[4982]: I0224 16:08:08.097146 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q666g\" (UniqueName: \"kubernetes.io/projected/03c3346c-b274-4585-aef3-8b39552f671a-kube-api-access-q666g\") on node \"crc\" DevicePath \"\""
Feb 24 16:08:08 crc kubenswrapper[4982]: I0224 16:08:08.291138 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532488-8hkmb" event={"ID":"03c3346c-b274-4585-aef3-8b39552f671a","Type":"ContainerDied","Data":"13265297cddd40095c5701e96abc9a3c6cb225feb45696dc8dc1f215d7505118"}
Feb 24 16:08:08 crc kubenswrapper[4982]: I0224 16:08:08.291181 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532488-8hkmb"
Feb 24 16:08:08 crc kubenswrapper[4982]: I0224 16:08:08.302135 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13265297cddd40095c5701e96abc9a3c6cb225feb45696dc8dc1f215d7505118"
Feb 24 16:08:08 crc kubenswrapper[4982]: I0224 16:08:08.582307 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532482-zdzqf"]
Feb 24 16:08:08 crc kubenswrapper[4982]: I0224 16:08:08.593635 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532482-zdzqf"]
Feb 24 16:08:09 crc kubenswrapper[4982]: I0224 16:08:09.154019 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:08:09 crc kubenswrapper[4982]: E0224 16:08:09.154492 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:08:09 crc kubenswrapper[4982]: I0224 16:08:09.249377 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54424d9-e6c7-41a2-be2a-31be457bffd7" path="/var/lib/kubelet/pods/d54424d9-e6c7-41a2-be2a-31be457bffd7/volumes"
Feb 24 16:08:09 crc kubenswrapper[4982]: I0224 16:08:09.311789 4982 generic.go:334] "Generic (PLEG): container finished" podID="46773d95-a48a-4b4d-b907-1739025cdc56" containerID="91a1aab36f23ca586ffc78f0a7208fcaae91be9b5051d88d88b0163c4fc0e488" exitCode=0
Feb 24 16:08:09 crc kubenswrapper[4982]: I0224 16:08:09.311838 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d44t5" event={"ID":"46773d95-a48a-4b4d-b907-1739025cdc56","Type":"ContainerDied","Data":"91a1aab36f23ca586ffc78f0a7208fcaae91be9b5051d88d88b0163c4fc0e488"}
Feb 24 16:08:11 crc kubenswrapper[4982]: I0224 16:08:11.335516 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d44t5" event={"ID":"46773d95-a48a-4b4d-b907-1739025cdc56","Type":"ContainerStarted","Data":"1a61eb0c4dccbab634397c0dab5b98b4d20934785b8811f94fce7ea82bfef72a"}
Feb 24 16:08:11 crc kubenswrapper[4982]: I0224 16:08:11.366032 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d44t5" podStartSLOduration=4.890630585 podStartE2EDuration="8.366009912s" podCreationTimestamp="2026-02-24 16:08:03 +0000 UTC" firstStartedPulling="2026-02-24 16:08:06.265538618 +0000 UTC m=+4747.884597121" lastFinishedPulling="2026-02-24 16:08:09.740917955 +0000 UTC m=+4751.359976448" observedRunningTime="2026-02-24 16:08:11.354473238 +0000 UTC m=+4752.973531731" watchObservedRunningTime="2026-02-24 16:08:11.366009912 +0000 UTC m=+4752.985068405"
Feb 24 16:08:14 crc kubenswrapper[4982]: I0224 16:08:14.075486 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d44t5"
Feb 24 16:08:14 crc kubenswrapper[4982]: I0224 16:08:14.076168 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d44t5"
Feb 24 16:08:15 crc kubenswrapper[4982]: I0224 16:08:15.130402 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d44t5" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="registry-server" probeResult="failure" output=<
Feb 24 16:08:15 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s
Feb 24 16:08:15 crc kubenswrapper[4982]: >
Feb 24 16:08:24 crc kubenswrapper[4982]: I0224 16:08:24.145430 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:08:24 crc kubenswrapper[4982]: E0224 16:08:24.146383 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:08:25 crc kubenswrapper[4982]: I0224 16:08:25.726316 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d44t5" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="registry-server" probeResult="failure" output=<
Feb 24 16:08:25 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s
Feb 24 16:08:25 crc kubenswrapper[4982]: >
Feb 24 16:08:34 crc kubenswrapper[4982]: I0224 16:08:34.281607 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d44t5"
Feb 24 16:08:34 crc kubenswrapper[4982]: I0224 16:08:34.346803 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d44t5"
Feb 24 16:08:34 crc kubenswrapper[4982]: I0224 16:08:34.900415 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d44t5"]
Feb 24 16:08:35 crc kubenswrapper[4982]: I0224 16:08:35.616177 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d44t5" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="registry-server" containerID="cri-o://1a61eb0c4dccbab634397c0dab5b98b4d20934785b8811f94fce7ea82bfef72a" gracePeriod=2
Feb 24 16:08:36 crc kubenswrapper[4982]: I0224 16:08:36.149668 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:08:36 crc kubenswrapper[4982]: E0224 16:08:36.151318 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:08:36 crc kubenswrapper[4982]: I0224 16:08:36.628921 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d44t5" event={"ID":"46773d95-a48a-4b4d-b907-1739025cdc56","Type":"ContainerDied","Data":"1a61eb0c4dccbab634397c0dab5b98b4d20934785b8811f94fce7ea82bfef72a"}
Feb 24 16:08:36 crc kubenswrapper[4982]: I0224 16:08:36.628839 4982 generic.go:334] "Generic (PLEG): container finished" podID="46773d95-a48a-4b4d-b907-1739025cdc56" containerID="1a61eb0c4dccbab634397c0dab5b98b4d20934785b8811f94fce7ea82bfef72a" exitCode=0
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.187553 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d44t5"
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.325103 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-catalog-content\") pod \"46773d95-a48a-4b4d-b907-1739025cdc56\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") "
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.325275 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n464m\" (UniqueName: \"kubernetes.io/projected/46773d95-a48a-4b4d-b907-1739025cdc56-kube-api-access-n464m\") pod \"46773d95-a48a-4b4d-b907-1739025cdc56\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") "
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.325346 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-utilities\") pod \"46773d95-a48a-4b4d-b907-1739025cdc56\" (UID: \"46773d95-a48a-4b4d-b907-1739025cdc56\") "
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.331047 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-utilities" (OuterVolumeSpecName: "utilities") pod "46773d95-a48a-4b4d-b907-1739025cdc56" (UID: "46773d95-a48a-4b4d-b907-1739025cdc56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.370855 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46773d95-a48a-4b4d-b907-1739025cdc56-kube-api-access-n464m" (OuterVolumeSpecName: "kube-api-access-n464m") pod "46773d95-a48a-4b4d-b907-1739025cdc56" (UID: "46773d95-a48a-4b4d-b907-1739025cdc56"). InnerVolumeSpecName "kube-api-access-n464m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.429128 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n464m\" (UniqueName: \"kubernetes.io/projected/46773d95-a48a-4b4d-b907-1739025cdc56-kube-api-access-n464m\") on node \"crc\" DevicePath \"\""
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.429519 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.506711 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46773d95-a48a-4b4d-b907-1739025cdc56" (UID: "46773d95-a48a-4b4d-b907-1739025cdc56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.532368 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46773d95-a48a-4b4d-b907-1739025cdc56-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.641458 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d44t5" event={"ID":"46773d95-a48a-4b4d-b907-1739025cdc56","Type":"ContainerDied","Data":"1966100e56db2dd8e0799f297500622eda8f9ec55c88804e6f51f2bbbc3d9dee"}
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.641529 4982 scope.go:117] "RemoveContainer" containerID="1a61eb0c4dccbab634397c0dab5b98b4d20934785b8811f94fce7ea82bfef72a"
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.641545 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d44t5"
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.681627 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d44t5"]
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.693951 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d44t5"]
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.697761 4982 scope.go:117] "RemoveContainer" containerID="91a1aab36f23ca586ffc78f0a7208fcaae91be9b5051d88d88b0163c4fc0e488"
Feb 24 16:08:37 crc kubenswrapper[4982]: I0224 16:08:37.727041 4982 scope.go:117] "RemoveContainer" containerID="5ad6cf32ab16251e101910fbdd13feb0c59b08b6e9ce309185edc4e1d14351c3"
Feb 24 16:08:39 crc kubenswrapper[4982]: I0224 16:08:39.166239 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" path="/var/lib/kubelet/pods/46773d95-a48a-4b4d-b907-1739025cdc56/volumes"
Feb 24 16:08:49 crc kubenswrapper[4982]: I0224 16:08:49.160020 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:08:49 crc kubenswrapper[4982]: E0224 16:08:49.163725 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:08:52 crc kubenswrapper[4982]: I0224 16:08:52.010007 4982 scope.go:117] "RemoveContainer" containerID="13a41bdfaec9ac13e9f4aa307b6ac6773b1867914c0f78c611a9af6f3486ed19"
Feb 24 16:09:01 crc kubenswrapper[4982]: I0224 16:09:01.146208 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:09:01 crc kubenswrapper[4982]: E0224 16:09:01.147191 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:09:14 crc kubenswrapper[4982]: I0224 16:09:14.146159 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:09:14 crc kubenswrapper[4982]: E0224 16:09:14.147671 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:09:29 crc kubenswrapper[4982]: I0224 16:09:29.175631 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:09:29 crc kubenswrapper[4982]: E0224 16:09:29.184104 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:09:44 crc kubenswrapper[4982]: I0224 16:09:44.146301 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:09:44 crc kubenswrapper[4982]: E0224 16:09:44.146941 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:09:59 crc kubenswrapper[4982]: I0224 16:09:59.155394 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:09:59 crc kubenswrapper[4982]: E0224 16:09:59.156648 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.449656 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532490-fcd6v"]
Feb 24 16:10:00 crc kubenswrapper[4982]: E0224 16:10:00.458930 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c3346c-b274-4585-aef3-8b39552f671a" containerName="oc"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.459344 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c3346c-b274-4585-aef3-8b39552f671a" containerName="oc"
Feb 24 16:10:00 crc kubenswrapper[4982]: E0224 16:10:00.459740 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="extract-utilities"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.459753 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="extract-utilities"
Feb 24 16:10:00 crc kubenswrapper[4982]: E0224 16:10:00.459772 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="registry-server"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.459781 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="registry-server"
Feb 24 16:10:00 crc kubenswrapper[4982]: E0224 16:10:00.459791 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="extract-content"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.459797 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="extract-content"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.462095 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="46773d95-a48a-4b4d-b907-1739025cdc56" containerName="registry-server"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.462152 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c3346c-b274-4585-aef3-8b39552f671a" containerName="oc"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.472545 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532490-fcd6v"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.486909 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.486922 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.486972 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.522201 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsz4v\" (UniqueName: \"kubernetes.io/projected/a5b28593-fdb3-499b-8bf6-c3710b6f56da-kube-api-access-bsz4v\") pod \"auto-csr-approver-29532490-fcd6v\" (UID: \"a5b28593-fdb3-499b-8bf6-c3710b6f56da\") " pod="openshift-infra/auto-csr-approver-29532490-fcd6v"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.587662 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532490-fcd6v"]
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.629624 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsz4v\" (UniqueName: \"kubernetes.io/projected/a5b28593-fdb3-499b-8bf6-c3710b6f56da-kube-api-access-bsz4v\") pod \"auto-csr-approver-29532490-fcd6v\" (UID: \"a5b28593-fdb3-499b-8bf6-c3710b6f56da\") " pod="openshift-infra/auto-csr-approver-29532490-fcd6v"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.687996 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsz4v\" (UniqueName: \"kubernetes.io/projected/a5b28593-fdb3-499b-8bf6-c3710b6f56da-kube-api-access-bsz4v\") pod \"auto-csr-approver-29532490-fcd6v\" (UID: \"a5b28593-fdb3-499b-8bf6-c3710b6f56da\") " pod="openshift-infra/auto-csr-approver-29532490-fcd6v"
Feb 24 16:10:00 crc kubenswrapper[4982]: I0224 16:10:00.838786 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532490-fcd6v"
Feb 24 16:10:02 crc kubenswrapper[4982]: I0224 16:10:02.398374 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532490-fcd6v"]
Feb 24 16:10:02 crc kubenswrapper[4982]: I0224 16:10:02.688146 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532490-fcd6v" event={"ID":"a5b28593-fdb3-499b-8bf6-c3710b6f56da","Type":"ContainerStarted","Data":"0993c837f658675d38abf753306def87bb53738eae253d4fe749336c476c93d4"}
Feb 24 16:10:07 crc kubenswrapper[4982]: I0224 16:10:07.752249 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532490-fcd6v" event={"ID":"a5b28593-fdb3-499b-8bf6-c3710b6f56da","Type":"ContainerStarted","Data":"55b13a006a948b5e0512aeac92aac97480d811497e431a6bdee8059144f88dc2"}
Feb 24 16:10:07 crc kubenswrapper[4982]: I0224 16:10:07.780165 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532490-fcd6v" podStartSLOduration=3.962393348 podStartE2EDuration="7.779184589s" podCreationTimestamp="2026-02-24 16:10:00 +0000 UTC" firstStartedPulling="2026-02-24 16:10:02.441445961 +0000 UTC m=+4864.060504454" lastFinishedPulling="2026-02-24 16:10:06.258237202 +0000 UTC m=+4867.877295695" observedRunningTime="2026-02-24 16:10:07.775426886 +0000 UTC m=+4869.394485379" watchObservedRunningTime="2026-02-24 16:10:07.779184589 +0000 UTC m=+4869.398243092"
Feb 24 16:10:08 crc kubenswrapper[4982]: I0224 16:10:08.764435 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532490-fcd6v" event={"ID":"a5b28593-fdb3-499b-8bf6-c3710b6f56da","Type":"ContainerDied","Data":"55b13a006a948b5e0512aeac92aac97480d811497e431a6bdee8059144f88dc2"}
Feb 24 16:10:08 crc kubenswrapper[4982]: I0224 16:10:08.764305 4982 generic.go:334] "Generic (PLEG): container finished" podID="a5b28593-fdb3-499b-8bf6-c3710b6f56da" containerID="55b13a006a948b5e0512aeac92aac97480d811497e431a6bdee8059144f88dc2" exitCode=0
Feb 24 16:10:10 crc kubenswrapper[4982]: I0224 16:10:10.675785 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532490-fcd6v"
Feb 24 16:10:10 crc kubenswrapper[4982]: I0224 16:10:10.705644 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsz4v\" (UniqueName: \"kubernetes.io/projected/a5b28593-fdb3-499b-8bf6-c3710b6f56da-kube-api-access-bsz4v\") pod \"a5b28593-fdb3-499b-8bf6-c3710b6f56da\" (UID: \"a5b28593-fdb3-499b-8bf6-c3710b6f56da\") "
Feb 24 16:10:10 crc kubenswrapper[4982]: I0224 16:10:10.727804 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b28593-fdb3-499b-8bf6-c3710b6f56da-kube-api-access-bsz4v" (OuterVolumeSpecName: "kube-api-access-bsz4v") pod "a5b28593-fdb3-499b-8bf6-c3710b6f56da" (UID: "a5b28593-fdb3-499b-8bf6-c3710b6f56da"). InnerVolumeSpecName "kube-api-access-bsz4v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:10:10 crc kubenswrapper[4982]: I0224 16:10:10.798583 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532490-fcd6v" event={"ID":"a5b28593-fdb3-499b-8bf6-c3710b6f56da","Type":"ContainerDied","Data":"0993c837f658675d38abf753306def87bb53738eae253d4fe749336c476c93d4"}
Feb 24 16:10:10 crc kubenswrapper[4982]: I0224 16:10:10.798635 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0993c837f658675d38abf753306def87bb53738eae253d4fe749336c476c93d4"
Feb 24 16:10:10 crc kubenswrapper[4982]: I0224 16:10:10.798763 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532490-fcd6v"
Feb 24 16:10:10 crc kubenswrapper[4982]: I0224 16:10:10.811414 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsz4v\" (UniqueName: \"kubernetes.io/projected/a5b28593-fdb3-499b-8bf6-c3710b6f56da-kube-api-access-bsz4v\") on node \"crc\" DevicePath \"\""
Feb 24 16:10:10 crc kubenswrapper[4982]: I0224 16:10:10.892061 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532484-qkgcl"]
Feb 24 16:10:10 crc kubenswrapper[4982]: I0224 16:10:10.903342 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532484-qkgcl"]
Feb 24 16:10:11 crc kubenswrapper[4982]: I0224 16:10:11.161318 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395e1054-ebf9-4f8d-8937-17ac137f9e30" path="/var/lib/kubelet/pods/395e1054-ebf9-4f8d-8937-17ac137f9e30/volumes"
Feb 24 16:10:13 crc kubenswrapper[4982]: I0224 16:10:13.149204 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:10:13 crc kubenswrapper[4982]: E0224 16:10:13.149853 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:10:26 crc kubenswrapper[4982]: I0224 16:10:26.145632 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:10:26 crc kubenswrapper[4982]: E0224 16:10:26.146557 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:10:40 crc kubenswrapper[4982]: I0224 16:10:40.145744 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:10:40 crc kubenswrapper[4982]: E0224 16:10:40.146682 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:10:52 crc kubenswrapper[4982]: I0224 16:10:52.748960 4982 scope.go:117] "RemoveContainer" containerID="e0bdda07973f5d32e4d218e4b6590e3ae2ebdd6f8ecc6e838e2a5c140bddfd07"
Feb 24 16:10:54 crc kubenswrapper[4982]: I0224 16:10:54.146434 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:10:54 crc kubenswrapper[4982]: E0224 16:10:54.147432 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:11:06 crc kubenswrapper[4982]: I0224 16:11:06.146159 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:11:06 crc kubenswrapper[4982]: E0224 16:11:06.147171 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:11:18 crc kubenswrapper[4982]: I0224 16:11:18.147023 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af"
Feb 24 16:11:18 crc kubenswrapper[4982]: I0224 16:11:18.933255 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"1b246296764e97aee07d79224c4eabadd2af08e8871845fff3a298abd7dff07a"}
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.284112 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532492-vjjnz"]
Feb 24 16:12:00 crc kubenswrapper[4982]: E0224 16:12:00.288385 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b28593-fdb3-499b-8bf6-c3710b6f56da" containerName="oc"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.288410 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b28593-fdb3-499b-8bf6-c3710b6f56da" containerName="oc"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.288860 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b28593-fdb3-499b-8bf6-c3710b6f56da" containerName="oc"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.295980 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532492-vjjnz"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.306860 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.306868 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.306874 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.374448 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532492-vjjnz"]
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.374509 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzhzf\" (UniqueName: \"kubernetes.io/projected/f33f287d-979b-4e95-a874-e2c9160d32db-kube-api-access-lzhzf\") pod \"auto-csr-approver-29532492-vjjnz\" (UID: \"f33f287d-979b-4e95-a874-e2c9160d32db\") " pod="openshift-infra/auto-csr-approver-29532492-vjjnz"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.476728 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzhzf\" (UniqueName: \"kubernetes.io/projected/f33f287d-979b-4e95-a874-e2c9160d32db-kube-api-access-lzhzf\") pod \"auto-csr-approver-29532492-vjjnz\" (UID: \"f33f287d-979b-4e95-a874-e2c9160d32db\") " pod="openshift-infra/auto-csr-approver-29532492-vjjnz"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.512695 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzhzf\" (UniqueName: \"kubernetes.io/projected/f33f287d-979b-4e95-a874-e2c9160d32db-kube-api-access-lzhzf\") pod \"auto-csr-approver-29532492-vjjnz\" (UID: \"f33f287d-979b-4e95-a874-e2c9160d32db\") " pod="openshift-infra/auto-csr-approver-29532492-vjjnz"
Feb 24 16:12:00 crc kubenswrapper[4982]: I0224 16:12:00.646145 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532492-vjjnz"
Feb 24 16:12:01 crc kubenswrapper[4982]: I0224 16:12:01.898693 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532492-vjjnz"]
Feb 24 16:12:01 crc kubenswrapper[4982]: W0224 16:12:01.898727 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf33f287d_979b_4e95_a874_e2c9160d32db.slice/crio-25951cac812ac29b6e6523a3ccfa35af8daf1afce778220ffe4bc3a4fd30e67a WatchSource:0}: Error finding container 25951cac812ac29b6e6523a3ccfa35af8daf1afce778220ffe4bc3a4fd30e67a: Status 404 returned error can't find the container with id 25951cac812ac29b6e6523a3ccfa35af8daf1afce778220ffe4bc3a4fd30e67a
Feb 24 16:12:01 crc kubenswrapper[4982]: I0224 16:12:01.906155 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 16:12:02 crc kubenswrapper[4982]: I0224 16:12:02.463659 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532492-vjjnz" event={"ID":"f33f287d-979b-4e95-a874-e2c9160d32db","Type":"ContainerStarted","Data":"25951cac812ac29b6e6523a3ccfa35af8daf1afce778220ffe4bc3a4fd30e67a"}
Feb 24 16:12:05 crc kubenswrapper[4982]: I0224 16:12:05.501898 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532492-vjjnz" event={"ID":"f33f287d-979b-4e95-a874-e2c9160d32db","Type":"ContainerStarted","Data":"4e4fe7ce733ab7c23bfcaf56ddc1ed3539539f14bf5f04ef7bdb87497d198d60"}
Feb 24 16:12:05 crc kubenswrapper[4982]: I0224 16:12:05.530060 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532492-vjjnz" podStartSLOduration=4.516217952 podStartE2EDuration="5.528818728s" podCreationTimestamp="2026-02-24 16:12:00 +0000 UTC" firstStartedPulling="2026-02-24 16:12:01.902841548 +0000 UTC m=+4983.521900051" lastFinishedPulling="2026-02-24 16:12:02.915442334 +0000 UTC m=+4984.534500827" observedRunningTime="2026-02-24 16:12:05.517423898 +0000 UTC m=+4987.136482411" watchObservedRunningTime="2026-02-24 16:12:05.528818728 +0000 UTC m=+4987.147877221"
Feb 24 16:12:06 crc kubenswrapper[4982]: I0224 16:12:06.514708 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532492-vjjnz" event={"ID":"f33f287d-979b-4e95-a874-e2c9160d32db","Type":"ContainerDied","Data":"4e4fe7ce733ab7c23bfcaf56ddc1ed3539539f14bf5f04ef7bdb87497d198d60"}
Feb 24 16:12:06 crc kubenswrapper[4982]: I0224 16:12:06.515204 4982 generic.go:334] "Generic (PLEG): container finished" podID="f33f287d-979b-4e95-a874-e2c9160d32db" containerID="4e4fe7ce733ab7c23bfcaf56ddc1ed3539539f14bf5f04ef7bdb87497d198d60" exitCode=0
Feb 24 16:12:08 crc kubenswrapper[4982]: I0224 16:12:08.138275 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532492-vjjnz"
Feb 24 16:12:08 crc kubenswrapper[4982]: I0224 16:12:08.168325 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzhzf\" (UniqueName: \"kubernetes.io/projected/f33f287d-979b-4e95-a874-e2c9160d32db-kube-api-access-lzhzf\") pod \"f33f287d-979b-4e95-a874-e2c9160d32db\" (UID: \"f33f287d-979b-4e95-a874-e2c9160d32db\") "
Feb 24 16:12:08 crc kubenswrapper[4982]: I0224 16:12:08.178299 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33f287d-979b-4e95-a874-e2c9160d32db-kube-api-access-lzhzf" (OuterVolumeSpecName: "kube-api-access-lzhzf") pod "f33f287d-979b-4e95-a874-e2c9160d32db" (UID: "f33f287d-979b-4e95-a874-e2c9160d32db"). InnerVolumeSpecName "kube-api-access-lzhzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:12:08 crc kubenswrapper[4982]: I0224 16:12:08.271324 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzhzf\" (UniqueName: \"kubernetes.io/projected/f33f287d-979b-4e95-a874-e2c9160d32db-kube-api-access-lzhzf\") on node \"crc\" DevicePath \"\""
Feb 24 16:12:08 crc kubenswrapper[4982]: I0224 16:12:08.554451 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532492-vjjnz" event={"ID":"f33f287d-979b-4e95-a874-e2c9160d32db","Type":"ContainerDied","Data":"25951cac812ac29b6e6523a3ccfa35af8daf1afce778220ffe4bc3a4fd30e67a"}
Feb 24 16:12:08 crc kubenswrapper[4982]: I0224 16:12:08.554650 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532492-vjjnz"
Feb 24 16:12:08 crc kubenswrapper[4982]: I0224 16:12:08.555460 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25951cac812ac29b6e6523a3ccfa35af8daf1afce778220ffe4bc3a4fd30e67a"
Feb 24 16:12:08 crc kubenswrapper[4982]: I0224 16:12:08.617779 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532486-88k27"]
Feb 24 16:12:08 crc kubenswrapper[4982]: I0224 16:12:08.629892 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532486-88k27"]
Feb 24 16:12:09 crc kubenswrapper[4982]: I0224 16:12:09.163089 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633787b6-7c71-4eb7-bcce-aaea34ddd03d" path="/var/lib/kubelet/pods/633787b6-7c71-4eb7-bcce-aaea34ddd03d/volumes"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.559908 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dr5dd"]
Feb 24 16:12:43 crc kubenswrapper[4982]: E0224 16:12:43.561227 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33f287d-979b-4e95-a874-e2c9160d32db" containerName="oc"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.561249 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33f287d-979b-4e95-a874-e2c9160d32db" containerName="oc"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.561559 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33f287d-979b-4e95-a874-e2c9160d32db" containerName="oc"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.563641 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.577465 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dr5dd"]
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.688404 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-catalog-content\") pod \"certified-operators-dr5dd\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") " pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.688454 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-utilities\") pod \"certified-operators-dr5dd\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") " pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.688626 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq8qt\" (UniqueName: \"kubernetes.io/projected/ac4014f0-7d33-4373-b15d-b425c9e510df-kube-api-access-nq8qt\") pod \"certified-operators-dr5dd\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") " pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.789956 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-catalog-content\") pod \"certified-operators-dr5dd\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") " pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.790017 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-utilities\") pod \"certified-operators-dr5dd\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") " pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.790084 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq8qt\" (UniqueName: \"kubernetes.io/projected/ac4014f0-7d33-4373-b15d-b425c9e510df-kube-api-access-nq8qt\") pod \"certified-operators-dr5dd\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") " pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.791088 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-utilities\") pod \"certified-operators-dr5dd\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") " pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.791648 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-catalog-content\") pod \"certified-operators-dr5dd\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") " pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.816680 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq8qt\" (UniqueName: \"kubernetes.io/projected/ac4014f0-7d33-4373-b15d-b425c9e510df-kube-api-access-nq8qt\") pod \"certified-operators-dr5dd\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") " pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:43 crc kubenswrapper[4982]: I0224 16:12:43.900356 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:44 crc kubenswrapper[4982]: I0224 16:12:44.408311 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dr5dd"]
Feb 24 16:12:44 crc kubenswrapper[4982]: I0224 16:12:44.974256 4982 generic.go:334] "Generic (PLEG): container finished" podID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerID="d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4" exitCode=0
Feb 24 16:12:44 crc kubenswrapper[4982]: I0224 16:12:44.974370 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr5dd" event={"ID":"ac4014f0-7d33-4373-b15d-b425c9e510df","Type":"ContainerDied","Data":"d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4"}
Feb 24 16:12:44 crc kubenswrapper[4982]: I0224 16:12:44.975397 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr5dd" event={"ID":"ac4014f0-7d33-4373-b15d-b425c9e510df","Type":"ContainerStarted","Data":"5b2623852d197388095a327de9d820c1210b1517e895e28d4a72abae47e8ee9a"}
Feb 24 16:12:46 crc kubenswrapper[4982]: I0224 16:12:46.998384 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr5dd" event={"ID":"ac4014f0-7d33-4373-b15d-b425c9e510df","Type":"ContainerStarted","Data":"57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8"}
Feb 24 16:12:49 crc kubenswrapper[4982]: I0224 16:12:49.023873 4982 generic.go:334] "Generic (PLEG): container finished" podID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerID="57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8" exitCode=0
Feb 24 16:12:49 crc kubenswrapper[4982]: I0224 16:12:49.023975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr5dd" event={"ID":"ac4014f0-7d33-4373-b15d-b425c9e510df","Type":"ContainerDied","Data":"57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8"}
Feb 24 16:12:50 crc kubenswrapper[4982]: I0224 16:12:50.044407 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr5dd" event={"ID":"ac4014f0-7d33-4373-b15d-b425c9e510df","Type":"ContainerStarted","Data":"30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf"}
Feb 24 16:12:50 crc kubenswrapper[4982]: I0224 16:12:50.081056 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dr5dd" podStartSLOduration=2.669549313 podStartE2EDuration="7.081031534s" podCreationTimestamp="2026-02-24 16:12:43 +0000 UTC" firstStartedPulling="2026-02-24 16:12:44.976639192 +0000 UTC m=+5026.595697685" lastFinishedPulling="2026-02-24 16:12:49.388121413 +0000 UTC m=+5031.007179906" observedRunningTime="2026-02-24 16:12:50.070683741 +0000 UTC m=+5031.689742254" watchObservedRunningTime="2026-02-24 16:12:50.081031534 +0000 UTC m=+5031.700090027"
Feb 24 16:12:53 crc kubenswrapper[4982]: I0224 16:12:53.203979 4982 scope.go:117] "RemoveContainer" containerID="2d5e049533ab266342b0aa73da90a775123cb47a6f629523ad0e964d55e47c71"
Feb 24 16:12:53 crc kubenswrapper[4982]: I0224 16:12:53.901173 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:53 crc kubenswrapper[4982]: I0224 16:12:53.901585 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:12:54 crc kubenswrapper[4982]: I0224 16:12:54.969358 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dr5dd" podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerName="registry-server" probeResult="failure" output=<
Feb 24 16:12:54 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s
Feb 24 16:12:54 crc kubenswrapper[4982]: >
Feb 24 16:13:03 crc kubenswrapper[4982]: I0224 16:13:03.957104 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:13:04 crc kubenswrapper[4982]: I0224 16:13:04.015883 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:13:04 crc kubenswrapper[4982]: I0224 16:13:04.203542 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dr5dd"]
Feb 24 16:13:05 crc kubenswrapper[4982]: I0224 16:13:05.208941 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dr5dd" podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerName="registry-server" containerID="cri-o://30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf" gracePeriod=2
Feb 24 16:13:05 crc kubenswrapper[4982]: I0224 16:13:05.833104 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:13:05 crc kubenswrapper[4982]: I0224 16:13:05.955241 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-catalog-content\") pod \"ac4014f0-7d33-4373-b15d-b425c9e510df\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") "
Feb 24 16:13:05 crc kubenswrapper[4982]: I0224 16:13:05.955350 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-utilities\") pod \"ac4014f0-7d33-4373-b15d-b425c9e510df\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") "
Feb 24 16:13:05 crc kubenswrapper[4982]: I0224 16:13:05.955374 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq8qt\" (UniqueName: \"kubernetes.io/projected/ac4014f0-7d33-4373-b15d-b425c9e510df-kube-api-access-nq8qt\") pod \"ac4014f0-7d33-4373-b15d-b425c9e510df\" (UID: \"ac4014f0-7d33-4373-b15d-b425c9e510df\") "
Feb 24 16:13:05 crc kubenswrapper[4982]: I0224 16:13:05.956837 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-utilities" (OuterVolumeSpecName: "utilities") pod "ac4014f0-7d33-4373-b15d-b425c9e510df" (UID: "ac4014f0-7d33-4373-b15d-b425c9e510df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:13:05 crc kubenswrapper[4982]: I0224 16:13:05.957944 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.022449 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac4014f0-7d33-4373-b15d-b425c9e510df" (UID: "ac4014f0-7d33-4373-b15d-b425c9e510df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.060408 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac4014f0-7d33-4373-b15d-b425c9e510df-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.220082 4982 generic.go:334] "Generic (PLEG): container finished" podID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerID="30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf" exitCode=0
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.220124 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr5dd" event={"ID":"ac4014f0-7d33-4373-b15d-b425c9e510df","Type":"ContainerDied","Data":"30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf"}
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.220165 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr5dd" event={"ID":"ac4014f0-7d33-4373-b15d-b425c9e510df","Type":"ContainerDied","Data":"5b2623852d197388095a327de9d820c1210b1517e895e28d4a72abae47e8ee9a"}
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.220186 4982 scope.go:117] "RemoveContainer" containerID="30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.220217 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dr5dd"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.245066 4982 scope.go:117] "RemoveContainer" containerID="57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.565791 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4014f0-7d33-4373-b15d-b425c9e510df-kube-api-access-nq8qt" (OuterVolumeSpecName: "kube-api-access-nq8qt") pod "ac4014f0-7d33-4373-b15d-b425c9e510df" (UID: "ac4014f0-7d33-4373-b15d-b425c9e510df"). InnerVolumeSpecName "kube-api-access-nq8qt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.572265 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq8qt\" (UniqueName: \"kubernetes.io/projected/ac4014f0-7d33-4373-b15d-b425c9e510df-kube-api-access-nq8qt\") on node \"crc\" DevicePath \"\""
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.587812 4982 scope.go:117] "RemoveContainer" containerID="d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.896574 4982 scope.go:117] "RemoveContainer" containerID="30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf"
Feb 24 16:13:06 crc kubenswrapper[4982]: E0224 16:13:06.899804 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf\": container with ID starting with 30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf not found: ID does not exist" containerID="30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.900076 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf"} err="failed to get container status \"30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf\": rpc error: code = NotFound desc = could not find container \"30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf\": container with ID starting with 30e8205bbdc253966d191662ca7bda202cf5875c8126a52faada700f7583e8cf not found: ID does not exist"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.900109 4982 scope.go:117] "RemoveContainer" containerID="57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8"
Feb 24 16:13:06 crc kubenswrapper[4982]: E0224 16:13:06.901364 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8\": container with ID starting with 57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8 not found: ID does not exist" containerID="57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.901410 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8"} err="failed to get container status \"57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8\": rpc error: code = NotFound desc = could not find container \"57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8\": container with ID starting with 57ec130c0b48eb221e352aa8b589728376a0a57c9eb8e18ed72b1aabe7efc6d8 not found: ID does not exist"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.901438 4982 scope.go:117] "RemoveContainer" containerID="d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4"
Feb 24 16:13:06 crc kubenswrapper[4982]: E0224 16:13:06.901762 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4\": container with ID starting with d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4 not found: ID does not exist" containerID="d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.901797 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4"} err="failed to get container status \"d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4\": rpc error: code = NotFound desc = could not find container \"d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4\": container with ID starting with d3fd86ef5641efd160f67e8df4783ba3839fed4f916e22ab6ea9ccbc02064ce4 not found: ID does not exist"
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.953838 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dr5dd"]
Feb 24 16:13:06 crc kubenswrapper[4982]: I0224 16:13:06.965679 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dr5dd"]
Feb 24 16:13:07 crc kubenswrapper[4982]: I0224 16:13:07.163053 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" path="/var/lib/kubelet/pods/ac4014f0-7d33-4373-b15d-b425c9e510df/volumes"
Feb 24 16:13:38 crc kubenswrapper[4982]: I0224 16:13:38.738593 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 16:13:38 crc kubenswrapper[4982]: I0224 16:13:38.739184 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.170390 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532494-rtrrp"]
Feb 24 16:14:00 crc kubenswrapper[4982]: E0224 16:14:00.172610 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerName="extract-content"
Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.172653 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerName="extract-content"
Feb 24 16:14:00 crc kubenswrapper[4982]: E0224 16:14:00.172704 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerName="registry-server"
Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.172711 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerName="registry-server"
Feb 24 16:14:00 crc kubenswrapper[4982]: E0224 16:14:00.172726 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerName="extract-utilities"
Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.172733 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerName="extract-utilities"
Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.172981 4982 memory_manager.go:354] "RemoveStaleState removing state"
podUID="ac4014f0-7d33-4373-b15d-b425c9e510df" containerName="registry-server" Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.173838 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532494-rtrrp" Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.177974 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.178148 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.179697 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.180074 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532494-rtrrp"] Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.280489 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7r89\" (UniqueName: \"kubernetes.io/projected/6f9442a3-4de9-4824-9ad6-1c362bbc7a76-kube-api-access-x7r89\") pod \"auto-csr-approver-29532494-rtrrp\" (UID: \"6f9442a3-4de9-4824-9ad6-1c362bbc7a76\") " pod="openshift-infra/auto-csr-approver-29532494-rtrrp" Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.383834 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7r89\" (UniqueName: \"kubernetes.io/projected/6f9442a3-4de9-4824-9ad6-1c362bbc7a76-kube-api-access-x7r89\") pod \"auto-csr-approver-29532494-rtrrp\" (UID: \"6f9442a3-4de9-4824-9ad6-1c362bbc7a76\") " pod="openshift-infra/auto-csr-approver-29532494-rtrrp" Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.405307 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7r89\" (UniqueName: \"kubernetes.io/projected/6f9442a3-4de9-4824-9ad6-1c362bbc7a76-kube-api-access-x7r89\") pod \"auto-csr-approver-29532494-rtrrp\" (UID: \"6f9442a3-4de9-4824-9ad6-1c362bbc7a76\") " pod="openshift-infra/auto-csr-approver-29532494-rtrrp" Feb 24 16:14:00 crc kubenswrapper[4982]: I0224 16:14:00.517532 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532494-rtrrp" Feb 24 16:14:01 crc kubenswrapper[4982]: I0224 16:14:01.013605 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532494-rtrrp"] Feb 24 16:14:01 crc kubenswrapper[4982]: I0224 16:14:01.998320 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532494-rtrrp" event={"ID":"6f9442a3-4de9-4824-9ad6-1c362bbc7a76","Type":"ContainerStarted","Data":"05054d6535ec8e622ecb310fd2ad5fd9d431e4485ce5fee541d44ed9f300edef"} Feb 24 16:14:03 crc kubenswrapper[4982]: I0224 16:14:03.012222 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f9442a3-4de9-4824-9ad6-1c362bbc7a76" containerID="4f9ff9a7c95fc445b74796b640fe73b33c7e51960f1038cd8e044feed7b11786" exitCode=0 Feb 24 16:14:03 crc kubenswrapper[4982]: I0224 16:14:03.012466 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532494-rtrrp" event={"ID":"6f9442a3-4de9-4824-9ad6-1c362bbc7a76","Type":"ContainerDied","Data":"4f9ff9a7c95fc445b74796b640fe73b33c7e51960f1038cd8e044feed7b11786"} Feb 24 16:14:04 crc kubenswrapper[4982]: I0224 16:14:04.590448 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532494-rtrrp" Feb 24 16:14:04 crc kubenswrapper[4982]: I0224 16:14:04.688116 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7r89\" (UniqueName: \"kubernetes.io/projected/6f9442a3-4de9-4824-9ad6-1c362bbc7a76-kube-api-access-x7r89\") pod \"6f9442a3-4de9-4824-9ad6-1c362bbc7a76\" (UID: \"6f9442a3-4de9-4824-9ad6-1c362bbc7a76\") " Feb 24 16:14:04 crc kubenswrapper[4982]: I0224 16:14:04.695671 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9442a3-4de9-4824-9ad6-1c362bbc7a76-kube-api-access-x7r89" (OuterVolumeSpecName: "kube-api-access-x7r89") pod "6f9442a3-4de9-4824-9ad6-1c362bbc7a76" (UID: "6f9442a3-4de9-4824-9ad6-1c362bbc7a76"). InnerVolumeSpecName "kube-api-access-x7r89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:14:04 crc kubenswrapper[4982]: I0224 16:14:04.792393 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7r89\" (UniqueName: \"kubernetes.io/projected/6f9442a3-4de9-4824-9ad6-1c362bbc7a76-kube-api-access-x7r89\") on node \"crc\" DevicePath \"\"" Feb 24 16:14:05 crc kubenswrapper[4982]: I0224 16:14:05.039255 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532494-rtrrp" event={"ID":"6f9442a3-4de9-4824-9ad6-1c362bbc7a76","Type":"ContainerDied","Data":"05054d6535ec8e622ecb310fd2ad5fd9d431e4485ce5fee541d44ed9f300edef"} Feb 24 16:14:05 crc kubenswrapper[4982]: I0224 16:14:05.039558 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05054d6535ec8e622ecb310fd2ad5fd9d431e4485ce5fee541d44ed9f300edef" Feb 24 16:14:05 crc kubenswrapper[4982]: I0224 16:14:05.039321 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532494-rtrrp" Feb 24 16:14:05 crc kubenswrapper[4982]: I0224 16:14:05.690460 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532488-8hkmb"] Feb 24 16:14:05 crc kubenswrapper[4982]: I0224 16:14:05.703946 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532488-8hkmb"] Feb 24 16:14:07 crc kubenswrapper[4982]: I0224 16:14:07.174329 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c3346c-b274-4585-aef3-8b39552f671a" path="/var/lib/kubelet/pods/03c3346c-b274-4585-aef3-8b39552f671a/volumes" Feb 24 16:14:08 crc kubenswrapper[4982]: I0224 16:14:08.738178 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:14:08 crc kubenswrapper[4982]: I0224 16:14:08.738290 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:14:38 crc kubenswrapper[4982]: I0224 16:14:38.738215 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:14:38 crc kubenswrapper[4982]: I0224 16:14:38.738740 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:14:38 crc kubenswrapper[4982]: I0224 16:14:38.738787 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 16:14:38 crc kubenswrapper[4982]: I0224 16:14:38.739733 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b246296764e97aee07d79224c4eabadd2af08e8871845fff3a298abd7dff07a"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 16:14:38 crc kubenswrapper[4982]: I0224 16:14:38.739785 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://1b246296764e97aee07d79224c4eabadd2af08e8871845fff3a298abd7dff07a" gracePeriod=600 Feb 24 16:14:39 crc kubenswrapper[4982]: I0224 16:14:39.477822 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="1b246296764e97aee07d79224c4eabadd2af08e8871845fff3a298abd7dff07a" exitCode=0 Feb 24 16:14:39 crc kubenswrapper[4982]: I0224 16:14:39.477857 4982 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"1b246296764e97aee07d79224c4eabadd2af08e8871845fff3a298abd7dff07a"} Feb 24 16:14:39 crc kubenswrapper[4982]: I0224 16:14:39.478429 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"} Feb 24 16:14:39 crc kubenswrapper[4982]: I0224 16:14:39.478452 4982 scope.go:117] "RemoveContainer" containerID="87e65ff53ad19b443909211f9271ce0f20c5f8265cbea50f9d3c954b02b197af" Feb 24 16:14:53 crc kubenswrapper[4982]: I0224 16:14:53.380079 4982 scope.go:117] "RemoveContainer" containerID="e862712392a2f7f0dcf536be01ecdc62235688618cb011847dc24f2394260bee" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.155334 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8"] Feb 24 16:15:00 crc kubenswrapper[4982]: E0224 16:15:00.156528 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9442a3-4de9-4824-9ad6-1c362bbc7a76" containerName="oc" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.156546 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9442a3-4de9-4824-9ad6-1c362bbc7a76" containerName="oc" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.156847 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f9442a3-4de9-4824-9ad6-1c362bbc7a76" containerName="oc" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.157812 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.161117 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.161319 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.172812 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8"] Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.357888 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-config-volume\") pod \"collect-profiles-29532495-bdvp8\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.357970 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-secret-volume\") pod \"collect-profiles-29532495-bdvp8\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.358079 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqv9\" (UniqueName: \"kubernetes.io/projected/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-kube-api-access-pmqv9\") pod \"collect-profiles-29532495-bdvp8\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.460783 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqv9\" (UniqueName: \"kubernetes.io/projected/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-kube-api-access-pmqv9\") pod \"collect-profiles-29532495-bdvp8\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.461149 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-config-volume\") pod \"collect-profiles-29532495-bdvp8\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.461216 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-secret-volume\") pod \"collect-profiles-29532495-bdvp8\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.462172 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-config-volume\") pod 
\"collect-profiles-29532495-bdvp8\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.472156 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-secret-volume\") pod \"collect-profiles-29532495-bdvp8\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.484909 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqv9\" (UniqueName: \"kubernetes.io/projected/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-kube-api-access-pmqv9\") pod \"collect-profiles-29532495-bdvp8\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:00 crc kubenswrapper[4982]: I0224 16:15:00.492799 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:01 crc kubenswrapper[4982]: I0224 16:15:01.144543 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8"] Feb 24 16:15:01 crc kubenswrapper[4982]: I0224 16:15:01.745395 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" event={"ID":"11114a96-3cd2-4af4-8bdc-cb2c752c73ac","Type":"ContainerStarted","Data":"16684e3a73b535405713077026b7a424c7bfb69e9847d278f8b867d2ab3baef0"} Feb 24 16:15:01 crc kubenswrapper[4982]: I0224 16:15:01.745796 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" event={"ID":"11114a96-3cd2-4af4-8bdc-cb2c752c73ac","Type":"ContainerStarted","Data":"defc70634ac48debcef3e5be898733830b5984a174e0da928a9d633a7502dccf"} Feb 24 16:15:01 crc kubenswrapper[4982]: I0224 16:15:01.768339 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" podStartSLOduration=1.768320397 podStartE2EDuration="1.768320397s" podCreationTimestamp="2026-02-24 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 16:15:01.761856251 +0000 UTC m=+5163.380914744" watchObservedRunningTime="2026-02-24 16:15:01.768320397 +0000 UTC m=+5163.387378890" Feb 24 16:15:02 crc kubenswrapper[4982]: I0224 16:15:02.770898 4982 generic.go:334] "Generic (PLEG): container finished" podID="11114a96-3cd2-4af4-8bdc-cb2c752c73ac" containerID="16684e3a73b535405713077026b7a424c7bfb69e9847d278f8b867d2ab3baef0" exitCode=0 Feb 24 16:15:02 crc kubenswrapper[4982]: I0224 16:15:02.771239 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" event={"ID":"11114a96-3cd2-4af4-8bdc-cb2c752c73ac","Type":"ContainerDied","Data":"16684e3a73b535405713077026b7a424c7bfb69e9847d278f8b867d2ab3baef0"} Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.266524 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.302276 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmqv9\" (UniqueName: \"kubernetes.io/projected/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-kube-api-access-pmqv9\") pod \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.302454 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-secret-volume\") pod \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.302787 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-config-volume\") pod \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\" (UID: \"11114a96-3cd2-4af4-8bdc-cb2c752c73ac\") " Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.303231 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-config-volume" (OuterVolumeSpecName: "config-volume") pod "11114a96-3cd2-4af4-8bdc-cb2c752c73ac" (UID: "11114a96-3cd2-4af4-8bdc-cb2c752c73ac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.303698 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.309432 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "11114a96-3cd2-4af4-8bdc-cb2c752c73ac" (UID: "11114a96-3cd2-4af4-8bdc-cb2c752c73ac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.310372 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-kube-api-access-pmqv9" (OuterVolumeSpecName: "kube-api-access-pmqv9") pod "11114a96-3cd2-4af4-8bdc-cb2c752c73ac" (UID: "11114a96-3cd2-4af4-8bdc-cb2c752c73ac"). InnerVolumeSpecName "kube-api-access-pmqv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.406466 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmqv9\" (UniqueName: \"kubernetes.io/projected/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-kube-api-access-pmqv9\") on node \"crc\" DevicePath \"\"" Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.406534 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11114a96-3cd2-4af4-8bdc-cb2c752c73ac-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.805965 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" event={"ID":"11114a96-3cd2-4af4-8bdc-cb2c752c73ac","Type":"ContainerDied","Data":"defc70634ac48debcef3e5be898733830b5984a174e0da928a9d633a7502dccf"} Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.806010 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532495-bdvp8" Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.806034 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="defc70634ac48debcef3e5be898733830b5984a174e0da928a9d633a7502dccf" Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.854642 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f"] Feb 24 16:15:04 crc kubenswrapper[4982]: I0224 16:15:04.865157 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532450-4w99f"] Feb 24 16:15:05 crc kubenswrapper[4982]: I0224 16:15:05.163737 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4" path="/var/lib/kubelet/pods/66fbf6b0-80fa-45ad-85dd-a4e151f6a6e4/volumes" Feb 24 16:15:53 crc kubenswrapper[4982]: I0224 16:15:53.475114 4982 scope.go:117] "RemoveContainer" containerID="a286607d622ec39765b7fe4d0832be9cbb36d19cfdae677171ad8097add1c70a" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.158752 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532496-vtvmq"] Feb 24 16:16:00 crc kubenswrapper[4982]: E0224 16:16:00.160202 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11114a96-3cd2-4af4-8bdc-cb2c752c73ac" containerName="collect-profiles" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.160223 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="11114a96-3cd2-4af4-8bdc-cb2c752c73ac" containerName="collect-profiles" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.160690 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="11114a96-3cd2-4af4-8bdc-cb2c752c73ac" containerName="collect-profiles" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.161718 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532496-vtvmq" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.163652 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.164676 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.165443 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.173655 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532496-vtvmq"] Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.294654 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw8fx\" (UniqueName: \"kubernetes.io/projected/b55db493-3b30-4804-a3e6-b1cfbc768196-kube-api-access-hw8fx\") pod \"auto-csr-approver-29532496-vtvmq\" (UID: \"b55db493-3b30-4804-a3e6-b1cfbc768196\") " pod="openshift-infra/auto-csr-approver-29532496-vtvmq" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.397269 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw8fx\" (UniqueName: \"kubernetes.io/projected/b55db493-3b30-4804-a3e6-b1cfbc768196-kube-api-access-hw8fx\") pod \"auto-csr-approver-29532496-vtvmq\" (UID: \"b55db493-3b30-4804-a3e6-b1cfbc768196\") " pod="openshift-infra/auto-csr-approver-29532496-vtvmq" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.419856 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw8fx\" (UniqueName: \"kubernetes.io/projected/b55db493-3b30-4804-a3e6-b1cfbc768196-kube-api-access-hw8fx\") pod \"auto-csr-approver-29532496-vtvmq\" (UID: \"b55db493-3b30-4804-a3e6-b1cfbc768196\") " pod="openshift-infra/auto-csr-approver-29532496-vtvmq" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.496442 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532496-vtvmq" Feb 24 16:16:00 crc kubenswrapper[4982]: I0224 16:16:00.996681 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532496-vtvmq"] Feb 24 16:16:01 crc kubenswrapper[4982]: I0224 16:16:01.514551 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532496-vtvmq" event={"ID":"b55db493-3b30-4804-a3e6-b1cfbc768196","Type":"ContainerStarted","Data":"57a87d0ae8b96e08aca3fcc8fc383fe0fcf18104e6fcdf8a4e1e3fcdb25f5ab4"} Feb 24 16:16:02 crc kubenswrapper[4982]: I0224 16:16:02.527675 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532496-vtvmq" event={"ID":"b55db493-3b30-4804-a3e6-b1cfbc768196","Type":"ContainerStarted","Data":"53f2c29dc4a5476b9a61f83a81488af505221616141440c6da6f16813c5b1e8b"} Feb 24 16:16:02 crc kubenswrapper[4982]: I0224 16:16:02.549198 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532496-vtvmq" podStartSLOduration=1.691853181 podStartE2EDuration="2.549175938s" podCreationTimestamp="2026-02-24 16:16:00 +0000 UTC" firstStartedPulling="2026-02-24 16:16:00.998780466 +0000 UTC m=+5222.617838959" lastFinishedPulling="2026-02-24 16:16:01.856103223 +0000 UTC m=+5223.475161716" observedRunningTime="2026-02-24 16:16:02.540963854 +0000 UTC m=+5224.160022347" watchObservedRunningTime="2026-02-24 16:16:02.549175938 +0000 UTC m=+5224.168234431" Feb 24 16:16:03 crc kubenswrapper[4982]: I0224 16:16:03.539732 4982 generic.go:334] "Generic (PLEG): container finished" podID="b55db493-3b30-4804-a3e6-b1cfbc768196" containerID="53f2c29dc4a5476b9a61f83a81488af505221616141440c6da6f16813c5b1e8b" exitCode=0 Feb 24 16:16:03 crc kubenswrapper[4982]: I0224 16:16:03.539842 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532496-vtvmq" event={"ID":"b55db493-3b30-4804-a3e6-b1cfbc768196","Type":"ContainerDied","Data":"53f2c29dc4a5476b9a61f83a81488af505221616141440c6da6f16813c5b1e8b"} Feb 24 16:16:05 crc kubenswrapper[4982]: I0224 16:16:05.569770 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532496-vtvmq" event={"ID":"b55db493-3b30-4804-a3e6-b1cfbc768196","Type":"ContainerDied","Data":"57a87d0ae8b96e08aca3fcc8fc383fe0fcf18104e6fcdf8a4e1e3fcdb25f5ab4"} Feb 24 16:16:05 crc kubenswrapper[4982]: I0224 16:16:05.570382 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57a87d0ae8b96e08aca3fcc8fc383fe0fcf18104e6fcdf8a4e1e3fcdb25f5ab4" Feb 24 16:16:05 crc kubenswrapper[4982]: I0224 16:16:05.616629 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532496-vtvmq" Feb 24 16:16:05 crc kubenswrapper[4982]: I0224 16:16:05.641760 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw8fx\" (UniqueName: \"kubernetes.io/projected/b55db493-3b30-4804-a3e6-b1cfbc768196-kube-api-access-hw8fx\") pod \"b55db493-3b30-4804-a3e6-b1cfbc768196\" (UID: \"b55db493-3b30-4804-a3e6-b1cfbc768196\") " Feb 24 16:16:05 crc kubenswrapper[4982]: I0224 16:16:05.649124 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55db493-3b30-4804-a3e6-b1cfbc768196-kube-api-access-hw8fx" (OuterVolumeSpecName: "kube-api-access-hw8fx") pod "b55db493-3b30-4804-a3e6-b1cfbc768196" (UID: "b55db493-3b30-4804-a3e6-b1cfbc768196"). InnerVolumeSpecName "kube-api-access-hw8fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:16:05 crc kubenswrapper[4982]: I0224 16:16:05.744627 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw8fx\" (UniqueName: \"kubernetes.io/projected/b55db493-3b30-4804-a3e6-b1cfbc768196-kube-api-access-hw8fx\") on node \"crc\" DevicePath \"\"" Feb 24 16:16:06 crc kubenswrapper[4982]: I0224 16:16:06.580140 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532496-vtvmq" Feb 24 16:16:06 crc kubenswrapper[4982]: I0224 16:16:06.676220 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532490-fcd6v"] Feb 24 16:16:06 crc kubenswrapper[4982]: I0224 16:16:06.691994 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532490-fcd6v"] Feb 24 16:16:07 crc kubenswrapper[4982]: I0224 16:16:07.165910 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5b28593-fdb3-499b-8bf6-c3710b6f56da" path="/var/lib/kubelet/pods/a5b28593-fdb3-499b-8bf6-c3710b6f56da/volumes" Feb 24 16:16:15 crc kubenswrapper[4982]: I0224 16:16:15.772426 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rtztb"] Feb 24 16:16:15 crc kubenswrapper[4982]: E0224 16:16:15.782273 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55db493-3b30-4804-a3e6-b1cfbc768196" containerName="oc" Feb 24 16:16:15 crc kubenswrapper[4982]: I0224 16:16:15.782295 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55db493-3b30-4804-a3e6-b1cfbc768196" containerName="oc" Feb 24 16:16:15 crc kubenswrapper[4982]: I0224 16:16:15.782601 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55db493-3b30-4804-a3e6-b1cfbc768196" containerName="oc" Feb 24 16:16:15 crc kubenswrapper[4982]: I0224 16:16:15.785871 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:15 crc kubenswrapper[4982]: I0224 16:16:15.790573 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtztb"] Feb 24 16:16:15 crc kubenswrapper[4982]: I0224 16:16:15.899754 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xz7m\" (UniqueName: \"kubernetes.io/projected/6a88e665-b70e-4f9f-8409-8e8b966e2732-kube-api-access-5xz7m\") pod \"redhat-operators-rtztb\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") " pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:15 crc kubenswrapper[4982]: I0224 16:16:15.899835 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-catalog-content\") pod \"redhat-operators-rtztb\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") " pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:15 crc kubenswrapper[4982]: I0224 16:16:15.900351 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-utilities\") pod \"redhat-operators-rtztb\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") " pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:16 crc kubenswrapper[4982]: I0224 16:16:16.002263 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-catalog-content\") pod \"redhat-operators-rtztb\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") " pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:16 crc kubenswrapper[4982]: I0224 16:16:16.002658 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-utilities\") pod \"redhat-operators-rtztb\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") " pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:16 crc kubenswrapper[4982]: I0224 16:16:16.002835 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xz7m\" (UniqueName: \"kubernetes.io/projected/6a88e665-b70e-4f9f-8409-8e8b966e2732-kube-api-access-5xz7m\") pod \"redhat-operators-rtztb\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") " pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:16 crc kubenswrapper[4982]: I0224 16:16:16.002923 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-catalog-content\") pod \"redhat-operators-rtztb\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") " pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:16 crc kubenswrapper[4982]: I0224 16:16:16.003368 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-utilities\") pod \"redhat-operators-rtztb\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") " pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:16 crc kubenswrapper[4982]: I0224 16:16:16.023839 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5xz7m\" (UniqueName: \"kubernetes.io/projected/6a88e665-b70e-4f9f-8409-8e8b966e2732-kube-api-access-5xz7m\") pod \"redhat-operators-rtztb\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") " pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:16 crc kubenswrapper[4982]: I0224 16:16:16.112963 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:16 crc kubenswrapper[4982]: I0224 16:16:16.611609 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtztb"] Feb 24 16:16:16 crc kubenswrapper[4982]: I0224 16:16:16.749359 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtztb" event={"ID":"6a88e665-b70e-4f9f-8409-8e8b966e2732","Type":"ContainerStarted","Data":"88aad026f4795d87340b349eac49cca48e10f1ed565279f5b921fecc304c48eb"} Feb 24 16:16:17 crc kubenswrapper[4982]: I0224 16:16:17.771235 4982 generic.go:334] "Generic (PLEG): container finished" podID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerID="f1a2beacda3a07af8e009a44e18effefe104e90653ca1a9673f5c8f2a8fa35bf" exitCode=0 Feb 24 16:16:17 crc kubenswrapper[4982]: I0224 16:16:17.771598 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtztb" event={"ID":"6a88e665-b70e-4f9f-8409-8e8b966e2732","Type":"ContainerDied","Data":"f1a2beacda3a07af8e009a44e18effefe104e90653ca1a9673f5c8f2a8fa35bf"} Feb 24 16:16:19 crc kubenswrapper[4982]: I0224 16:16:19.794999 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtztb" event={"ID":"6a88e665-b70e-4f9f-8409-8e8b966e2732","Type":"ContainerStarted","Data":"ea99432e3d8af60e65f1599c25ab7ecb638a31865ba4cf7fed2e66a5963d057d"} Feb 24 16:16:24 crc kubenswrapper[4982]: I0224 16:16:24.845883 4982 generic.go:334] "Generic (PLEG): container finished" podID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerID="ea99432e3d8af60e65f1599c25ab7ecb638a31865ba4cf7fed2e66a5963d057d" exitCode=0 Feb 24 16:16:24 crc kubenswrapper[4982]: I0224 16:16:24.845967 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtztb" event={"ID":"6a88e665-b70e-4f9f-8409-8e8b966e2732","Type":"ContainerDied","Data":"ea99432e3d8af60e65f1599c25ab7ecb638a31865ba4cf7fed2e66a5963d057d"} Feb 24 16:16:25 crc kubenswrapper[4982]: I0224 16:16:25.858299 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtztb" event={"ID":"6a88e665-b70e-4f9f-8409-8e8b966e2732","Type":"ContainerStarted","Data":"b4b4674615dd09e8db1aa0f4da54b7f8e26bdf9d6d33140b9b220b4cb42679fe"} Feb 24 16:16:25 crc kubenswrapper[4982]: I0224 16:16:25.881144 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rtztb" podStartSLOduration=3.183690668 podStartE2EDuration="10.881120893s" podCreationTimestamp="2026-02-24 16:16:15 +0000 UTC" firstStartedPulling="2026-02-24 16:16:17.775446655 +0000 UTC m=+5239.394505178" lastFinishedPulling="2026-02-24 16:16:25.47287691 +0000 UTC m=+5247.091935403" observedRunningTime="2026-02-24 16:16:25.875843489 +0000 UTC m=+5247.494901992" watchObservedRunningTime="2026-02-24 16:16:25.881120893 +0000 UTC m=+5247.500179386" Feb 24 16:16:26 crc kubenswrapper[4982]: I0224 16:16:26.114020 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 
16:16:26 crc kubenswrapper[4982]: I0224 16:16:26.114077 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:16:27 crc kubenswrapper[4982]: I0224 16:16:27.160812 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rtztb" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="registry-server" probeResult="failure" output=< Feb 24 16:16:27 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:16:27 crc kubenswrapper[4982]: > Feb 24 16:16:37 crc kubenswrapper[4982]: I0224 16:16:37.172139 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rtztb" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="registry-server" probeResult="failure" output=< Feb 24 16:16:37 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:16:37 crc kubenswrapper[4982]: > Feb 24 16:16:47 crc kubenswrapper[4982]: I0224 16:16:47.217610 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rtztb" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="registry-server" probeResult="failure" output=< Feb 24 16:16:47 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:16:47 crc kubenswrapper[4982]: > Feb 24 16:16:54 crc kubenswrapper[4982]: I0224 16:16:54.135157 4982 scope.go:117] "RemoveContainer" containerID="55b13a006a948b5e0512aeac92aac97480d811497e431a6bdee8059144f88dc2" Feb 24 16:16:57 crc kubenswrapper[4982]: I0224 16:16:57.165355 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rtztb" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="registry-server" probeResult="failure" output=< Feb 24 16:16:57 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:16:57 crc kubenswrapper[4982]: > Feb 24 16:17:00 crc kubenswrapper[4982]: I0224 16:17:00.965091 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j6hpq"] Feb 24 16:17:00 crc kubenswrapper[4982]: I0224 16:17:00.969102 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:00 crc kubenswrapper[4982]: I0224 16:17:00.976788 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6hpq"] Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.132520 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-utilities\") pod \"redhat-marketplace-j6hpq\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.132733 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-catalog-content\") pod \"redhat-marketplace-j6hpq\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.133027 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9jfp\" (UniqueName: \"kubernetes.io/projected/c98a34ab-c279-4b60-9427-309c0464fb9e-kube-api-access-k9jfp\") pod \"redhat-marketplace-j6hpq\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.235015 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9jfp\" (UniqueName: \"kubernetes.io/projected/c98a34ab-c279-4b60-9427-309c0464fb9e-kube-api-access-k9jfp\") pod \"redhat-marketplace-j6hpq\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.235411 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-utilities\") pod \"redhat-marketplace-j6hpq\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.235689 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-catalog-content\") pod \"redhat-marketplace-j6hpq\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.236058 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-utilities\") pod \"redhat-marketplace-j6hpq\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.236296 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-catalog-content\") pod \"redhat-marketplace-j6hpq\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.260602 4982 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k9jfp\" (UniqueName: \"kubernetes.io/projected/c98a34ab-c279-4b60-9427-309c0464fb9e-kube-api-access-k9jfp\") pod \"redhat-marketplace-j6hpq\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:01 crc kubenswrapper[4982]: I0224 16:17:01.297849 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:02 crc kubenswrapper[4982]: I0224 16:17:02.208309 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6hpq"] Feb 24 16:17:02 crc kubenswrapper[4982]: I0224 16:17:02.274645 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6hpq" event={"ID":"c98a34ab-c279-4b60-9427-309c0464fb9e","Type":"ContainerStarted","Data":"7ae4570a3bd6d19b2f097764a2d10dc7869be18a799c49db23eb9dc9b3fbdd57"} Feb 24 16:17:03 crc kubenswrapper[4982]: I0224 16:17:03.285817 4982 generic.go:334] "Generic (PLEG): container finished" podID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerID="01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7" exitCode=0 Feb 24 16:17:03 crc kubenswrapper[4982]: I0224 16:17:03.285922 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6hpq" event={"ID":"c98a34ab-c279-4b60-9427-309c0464fb9e","Type":"ContainerDied","Data":"01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7"} Feb 24 16:17:03 crc kubenswrapper[4982]: I0224 16:17:03.291708 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 16:17:04 crc kubenswrapper[4982]: I0224 16:17:04.299794 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6hpq" event={"ID":"c98a34ab-c279-4b60-9427-309c0464fb9e","Type":"ContainerStarted","Data":"786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f"} Feb 24 16:17:06 crc kubenswrapper[4982]: I0224 16:17:06.178328 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:17:06 crc kubenswrapper[4982]: I0224 16:17:06.231250 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rtztb" Feb 24 16:17:06 crc kubenswrapper[4982]: I0224 16:17:06.322299 4982 generic.go:334] "Generic (PLEG): container finished" podID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerID="786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f" exitCode=0 Feb 24 16:17:06 crc kubenswrapper[4982]: I0224 16:17:06.322376 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6hpq" event={"ID":"c98a34ab-c279-4b60-9427-309c0464fb9e","Type":"ContainerDied","Data":"786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f"} Feb 24 16:17:07 crc kubenswrapper[4982]: I0224 16:17:07.335190 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6hpq" event={"ID":"c98a34ab-c279-4b60-9427-309c0464fb9e","Type":"ContainerStarted","Data":"875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f"} Feb 24 16:17:07 crc kubenswrapper[4982]: I0224 16:17:07.363201 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j6hpq" podStartSLOduration=3.958586649 
podStartE2EDuration="7.36317489s" podCreationTimestamp="2026-02-24 16:17:00 +0000 UTC" firstStartedPulling="2026-02-24 16:17:03.290623626 +0000 UTC m=+5284.909682119" lastFinishedPulling="2026-02-24 16:17:06.695211877 +0000 UTC m=+5288.314270360" observedRunningTime="2026-02-24 16:17:07.357906807 +0000 UTC m=+5288.976965310" watchObservedRunningTime="2026-02-24 16:17:07.36317489 +0000 UTC m=+5288.982233403" Feb 24 16:17:08 crc kubenswrapper[4982]: I0224 16:17:08.361915 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtztb"] Feb 24 16:17:08 crc kubenswrapper[4982]: I0224 16:17:08.362461 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rtztb" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="registry-server" containerID="cri-o://b4b4674615dd09e8db1aa0f4da54b7f8e26bdf9d6d33140b9b220b4cb42679fe" gracePeriod=2 Feb 24 16:17:08 crc kubenswrapper[4982]: I0224 16:17:08.738576 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:17:08 crc kubenswrapper[4982]: I0224 16:17:08.738664 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.363541 4982 generic.go:334] "Generic (PLEG): container finished" podID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerID="b4b4674615dd09e8db1aa0f4da54b7f8e26bdf9d6d33140b9b220b4cb42679fe" exitCode=0 Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.363614 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtztb" event={"ID":"6a88e665-b70e-4f9f-8409-8e8b966e2732","Type":"ContainerDied","Data":"b4b4674615dd09e8db1aa0f4da54b7f8e26bdf9d6d33140b9b220b4cb42679fe"} Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.502590 4982 util.go:48] "No ready sandbox for pod can be found. 
Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.663311 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-catalog-content\") pod \"6a88e665-b70e-4f9f-8409-8e8b966e2732\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") "
Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.663612 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xz7m\" (UniqueName: \"kubernetes.io/projected/6a88e665-b70e-4f9f-8409-8e8b966e2732-kube-api-access-5xz7m\") pod \"6a88e665-b70e-4f9f-8409-8e8b966e2732\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") "
Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.663660 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-utilities\") pod \"6a88e665-b70e-4f9f-8409-8e8b966e2732\" (UID: \"6a88e665-b70e-4f9f-8409-8e8b966e2732\") "
Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.664865 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-utilities" (OuterVolumeSpecName: "utilities") pod "6a88e665-b70e-4f9f-8409-8e8b966e2732" (UID: "6a88e665-b70e-4f9f-8409-8e8b966e2732"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.703699 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a88e665-b70e-4f9f-8409-8e8b966e2732-kube-api-access-5xz7m" (OuterVolumeSpecName: "kube-api-access-5xz7m") pod "6a88e665-b70e-4f9f-8409-8e8b966e2732" (UID: "6a88e665-b70e-4f9f-8409-8e8b966e2732"). InnerVolumeSpecName "kube-api-access-5xz7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.766419 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xz7m\" (UniqueName: \"kubernetes.io/projected/6a88e665-b70e-4f9f-8409-8e8b966e2732-kube-api-access-5xz7m\") on node \"crc\" DevicePath \"\""
Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.766477 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.834346 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a88e665-b70e-4f9f-8409-8e8b966e2732" (UID: "6a88e665-b70e-4f9f-8409-8e8b966e2732"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:17:09 crc kubenswrapper[4982]: I0224 16:17:09.868633 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a88e665-b70e-4f9f-8409-8e8b966e2732-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 16:17:10 crc kubenswrapper[4982]: I0224 16:17:10.379913 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtztb" event={"ID":"6a88e665-b70e-4f9f-8409-8e8b966e2732","Type":"ContainerDied","Data":"88aad026f4795d87340b349eac49cca48e10f1ed565279f5b921fecc304c48eb"}
Feb 24 16:17:10 crc kubenswrapper[4982]: I0224 16:17:10.379985 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtztb"
Feb 24 16:17:10 crc kubenswrapper[4982]: I0224 16:17:10.380186 4982 scope.go:117] "RemoveContainer" containerID="b4b4674615dd09e8db1aa0f4da54b7f8e26bdf9d6d33140b9b220b4cb42679fe"
Feb 24 16:17:10 crc kubenswrapper[4982]: I0224 16:17:10.405266 4982 scope.go:117] "RemoveContainer" containerID="ea99432e3d8af60e65f1599c25ab7ecb638a31865ba4cf7fed2e66a5963d057d"
Feb 24 16:17:10 crc kubenswrapper[4982]: I0224 16:17:10.427174 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtztb"]
Feb 24 16:17:10 crc kubenswrapper[4982]: I0224 16:17:10.443298 4982 scope.go:117] "RemoveContainer" containerID="f1a2beacda3a07af8e009a44e18effefe104e90653ca1a9673f5c8f2a8fa35bf"
Feb 24 16:17:10 crc kubenswrapper[4982]: I0224 16:17:10.454447 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rtztb"]
Feb 24 16:17:11 crc kubenswrapper[4982]: I0224 16:17:11.159757 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" path="/var/lib/kubelet/pods/6a88e665-b70e-4f9f-8409-8e8b966e2732/volumes"
Feb 24 16:17:11 crc kubenswrapper[4982]: I0224 16:17:11.298370 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j6hpq"
Feb 24 16:17:11 crc kubenswrapper[4982]: I0224 16:17:11.298703 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j6hpq"
Feb 24 16:17:12 crc kubenswrapper[4982]: I0224 16:17:12.367165 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-j6hpq" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerName="registry-server" probeResult="failure" output=<
Feb 24 16:17:12 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s
Feb 24 16:17:12 crc kubenswrapper[4982]: >
Feb 24 16:17:21 crc kubenswrapper[4982]: I0224 16:17:21.365958 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j6hpq"
Feb 24 16:17:21 crc kubenswrapper[4982]: I0224 16:17:21.434306 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j6hpq"
Feb 24 16:17:21 crc kubenswrapper[4982]: I0224 16:17:21.615572 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6hpq"]
Feb 24 16:17:22 crc kubenswrapper[4982]: I0224 16:17:22.509002 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j6hpq" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerName="registry-server" containerID="cri-o://875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f" gracePeriod=2
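The Startup probe failure above (timeout: failed to connect service ":50051" within 1s) is the signature of a gRPC health check against the catalog pod's registry-server port while the catalog is still loading; nine seconds later the same probe reports started. A rough Go equivalent of that check, assuming the standard grpc-go health API and the same 1s budget (illustrative only; the actual probe command in the pod spec is not shown in this log):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // Dial the registry-server port with the probe's 1s budget; a catalog
        // that is still extracting content will not be listening yet.
        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithBlock())
        if err != nil {
            fmt.Println("probe failed:", err) // the failure the kubelet logged above
            return
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Println("status:", resp.GetStatus()) // SERVING once the catalog is up
    }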
containerName="registry-server" containerID="cri-o://875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f" gracePeriod=2 Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.059064 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.097081 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9jfp\" (UniqueName: \"kubernetes.io/projected/c98a34ab-c279-4b60-9427-309c0464fb9e-kube-api-access-k9jfp\") pod \"c98a34ab-c279-4b60-9427-309c0464fb9e\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.105835 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98a34ab-c279-4b60-9427-309c0464fb9e-kube-api-access-k9jfp" (OuterVolumeSpecName: "kube-api-access-k9jfp") pod "c98a34ab-c279-4b60-9427-309c0464fb9e" (UID: "c98a34ab-c279-4b60-9427-309c0464fb9e"). InnerVolumeSpecName "kube-api-access-k9jfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.199189 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-utilities\") pod \"c98a34ab-c279-4b60-9427-309c0464fb9e\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.199241 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-catalog-content\") pod \"c98a34ab-c279-4b60-9427-309c0464fb9e\" (UID: \"c98a34ab-c279-4b60-9427-309c0464fb9e\") " Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.201122 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-utilities" (OuterVolumeSpecName: "utilities") pod "c98a34ab-c279-4b60-9427-309c0464fb9e" (UID: "c98a34ab-c279-4b60-9427-309c0464fb9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.201682 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9jfp\" (UniqueName: \"kubernetes.io/projected/c98a34ab-c279-4b60-9427-309c0464fb9e-kube-api-access-k9jfp\") on node \"crc\" DevicePath \"\"" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.201765 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.231231 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c98a34ab-c279-4b60-9427-309c0464fb9e" (UID: "c98a34ab-c279-4b60-9427-309c0464fb9e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.304574 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c98a34ab-c279-4b60-9427-309c0464fb9e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.531097 4982 generic.go:334] "Generic (PLEG): container finished" podID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerID="875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f" exitCode=0 Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.531165 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6hpq" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.531189 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6hpq" event={"ID":"c98a34ab-c279-4b60-9427-309c0464fb9e","Type":"ContainerDied","Data":"875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f"} Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.531670 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6hpq" event={"ID":"c98a34ab-c279-4b60-9427-309c0464fb9e","Type":"ContainerDied","Data":"7ae4570a3bd6d19b2f097764a2d10dc7869be18a799c49db23eb9dc9b3fbdd57"} Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.531694 4982 scope.go:117] "RemoveContainer" containerID="875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.566165 4982 scope.go:117] "RemoveContainer" containerID="786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.577468 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6hpq"] Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.589273 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6hpq"] Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.602098 4982 scope.go:117] "RemoveContainer" containerID="01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.665775 4982 scope.go:117] "RemoveContainer" containerID="875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f" Feb 24 16:17:23 crc kubenswrapper[4982]: E0224 16:17:23.669817 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f\": container with ID starting with 875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f not found: ID does not exist" containerID="875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.670297 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f"} err="failed to get container status \"875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f\": rpc error: code = NotFound desc = could not find container \"875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f\": container with ID starting with 875ba77be349efcc81575df15dcf6f75a3b3e340391907b025e8183963bee49f not found: ID does not exist" Feb 24 16:17:23 
crc kubenswrapper[4982]: I0224 16:17:23.670346 4982 scope.go:117] "RemoveContainer" containerID="786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f" Feb 24 16:17:23 crc kubenswrapper[4982]: E0224 16:17:23.670887 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f\": container with ID starting with 786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f not found: ID does not exist" containerID="786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.670924 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f"} err="failed to get container status \"786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f\": rpc error: code = NotFound desc = could not find container \"786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f\": container with ID starting with 786770430d890da21112559a4a6153db8c600850327458f274bc2d51a86aae8f not found: ID does not exist" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.670942 4982 scope.go:117] "RemoveContainer" containerID="01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7" Feb 24 16:17:23 crc kubenswrapper[4982]: E0224 16:17:23.671392 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7\": container with ID starting with 01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7 not found: ID does not exist" containerID="01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7" Feb 24 16:17:23 crc kubenswrapper[4982]: I0224 16:17:23.671445 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7"} err="failed to get container status \"01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7\": rpc error: code = NotFound desc = could not find container \"01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7\": container with ID starting with 01ebeca08f9ca52af0b1f7bd15975587d6ae3cf898cc21499d74315b259184d7 not found: ID does not exist" Feb 24 16:17:25 crc kubenswrapper[4982]: I0224 16:17:25.158130 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" path="/var/lib/kubelet/pods/c98a34ab-c279-4b60-9427-309c0464fb9e/volumes" Feb 24 16:17:38 crc kubenswrapper[4982]: I0224 16:17:38.738357 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:17:38 crc kubenswrapper[4982]: I0224 16:17:38.739074 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.186090 4982 kubelet.go:2421] "SyncLoop ADD" 
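The trio of "ContainerStatus from runtime service failed ... NotFound" errors above looks alarming but appears benign: the kubelet queues RemoveContainer for each of the deleted pod's containers (registry-server and, judging by the IDs, the two earlier init containers), while CRI-O has already removed them together with the pod sandbox, so the follow-up status lookup finds nothing and the deletor merely logs the miss before the pod's volume directory is cleaned up at 16:17:25.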
source="api" pods=["openshift-infra/auto-csr-approver-29532498-6jlt4"] Feb 24 16:18:00 crc kubenswrapper[4982]: E0224 16:18:00.187053 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="extract-utilities" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.187067 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="extract-utilities" Feb 24 16:18:00 crc kubenswrapper[4982]: E0224 16:18:00.187084 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="extract-content" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.187090 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="extract-content" Feb 24 16:18:00 crc kubenswrapper[4982]: E0224 16:18:00.187105 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerName="extract-content" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.187111 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerName="extract-content" Feb 24 16:18:00 crc kubenswrapper[4982]: E0224 16:18:00.187128 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="registry-server" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.187133 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="registry-server" Feb 24 16:18:00 crc kubenswrapper[4982]: E0224 16:18:00.187146 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerName="extract-utilities" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.187152 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerName="extract-utilities" Feb 24 16:18:00 crc kubenswrapper[4982]: E0224 16:18:00.187174 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerName="registry-server" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.187180 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerName="registry-server" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.187391 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c98a34ab-c279-4b60-9427-309c0464fb9e" containerName="registry-server" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.187413 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a88e665-b70e-4f9f-8409-8e8b966e2732" containerName="registry-server" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.188441 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532498-6jlt4" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.192661 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.193001 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.194375 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.213081 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532498-6jlt4"] Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.343324 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k2cj\" (UniqueName: \"kubernetes.io/projected/a0f5edc2-9540-486b-bb4d-c222528670d9-kube-api-access-7k2cj\") pod \"auto-csr-approver-29532498-6jlt4\" (UID: \"a0f5edc2-9540-486b-bb4d-c222528670d9\") " pod="openshift-infra/auto-csr-approver-29532498-6jlt4" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.445089 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k2cj\" (UniqueName: \"kubernetes.io/projected/a0f5edc2-9540-486b-bb4d-c222528670d9-kube-api-access-7k2cj\") pod \"auto-csr-approver-29532498-6jlt4\" (UID: \"a0f5edc2-9540-486b-bb4d-c222528670d9\") " pod="openshift-infra/auto-csr-approver-29532498-6jlt4" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.467359 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k2cj\" (UniqueName: \"kubernetes.io/projected/a0f5edc2-9540-486b-bb4d-c222528670d9-kube-api-access-7k2cj\") pod \"auto-csr-approver-29532498-6jlt4\" (UID: \"a0f5edc2-9540-486b-bb4d-c222528670d9\") " pod="openshift-infra/auto-csr-approver-29532498-6jlt4" Feb 24 16:18:00 crc kubenswrapper[4982]: I0224 16:18:00.511180 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532498-6jlt4" Feb 24 16:18:01 crc kubenswrapper[4982]: I0224 16:18:01.012236 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532498-6jlt4"] Feb 24 16:18:01 crc kubenswrapper[4982]: W0224 16:18:01.016118 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0f5edc2_9540_486b_bb4d_c222528670d9.slice/crio-edd304eb87c0d262e38c70a5c9e1b8df76628e5c608166a27701d4d3a9655517 WatchSource:0}: Error finding container edd304eb87c0d262e38c70a5c9e1b8df76628e5c608166a27701d4d3a9655517: Status 404 returned error can't find the container with id edd304eb87c0d262e38c70a5c9e1b8df76628e5c608166a27701d4d3a9655517 Feb 24 16:18:01 crc kubenswrapper[4982]: I0224 16:18:01.965143 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532498-6jlt4" event={"ID":"a0f5edc2-9540-486b-bb4d-c222528670d9","Type":"ContainerStarted","Data":"edd304eb87c0d262e38c70a5c9e1b8df76628e5c608166a27701d4d3a9655517"} Feb 24 16:18:03 crc kubenswrapper[4982]: I0224 16:18:03.998974 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532498-6jlt4" event={"ID":"a0f5edc2-9540-486b-bb4d-c222528670d9","Type":"ContainerStarted","Data":"4f783b682c642b679331d6128028c6d0fa3e295b411d7c6e422b9b69f531faf8"} Feb 24 16:18:04 crc kubenswrapper[4982]: I0224 16:18:04.029205 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532498-6jlt4" podStartSLOduration=3.092689963 podStartE2EDuration="4.029182204s" podCreationTimestamp="2026-02-24 16:18:00 +0000 UTC" firstStartedPulling="2026-02-24 16:18:01.017735463 +0000 UTC m=+5342.636793956" lastFinishedPulling="2026-02-24 16:18:01.954227704 +0000 UTC m=+5343.573286197" observedRunningTime="2026-02-24 16:18:04.017812405 +0000 UTC m=+5345.636870908" watchObservedRunningTime="2026-02-24 16:18:04.029182204 +0000 UTC m=+5345.648240707" Feb 24 16:18:05 crc kubenswrapper[4982]: I0224 16:18:05.014932 4982 generic.go:334] "Generic (PLEG): container finished" podID="a0f5edc2-9540-486b-bb4d-c222528670d9" containerID="4f783b682c642b679331d6128028c6d0fa3e295b411d7c6e422b9b69f531faf8" exitCode=0 Feb 24 16:18:05 crc kubenswrapper[4982]: I0224 16:18:05.015048 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532498-6jlt4" event={"ID":"a0f5edc2-9540-486b-bb4d-c222528670d9","Type":"ContainerDied","Data":"4f783b682c642b679331d6128028c6d0fa3e295b411d7c6e422b9b69f531faf8"} Feb 24 16:18:06 crc kubenswrapper[4982]: I0224 16:18:06.461247 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532498-6jlt4" Feb 24 16:18:06 crc kubenswrapper[4982]: I0224 16:18:06.496090 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k2cj\" (UniqueName: \"kubernetes.io/projected/a0f5edc2-9540-486b-bb4d-c222528670d9-kube-api-access-7k2cj\") pod \"a0f5edc2-9540-486b-bb4d-c222528670d9\" (UID: \"a0f5edc2-9540-486b-bb4d-c222528670d9\") " Feb 24 16:18:06 crc kubenswrapper[4982]: I0224 16:18:06.502196 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f5edc2-9540-486b-bb4d-c222528670d9-kube-api-access-7k2cj" (OuterVolumeSpecName: "kube-api-access-7k2cj") pod "a0f5edc2-9540-486b-bb4d-c222528670d9" (UID: "a0f5edc2-9540-486b-bb4d-c222528670d9"). InnerVolumeSpecName "kube-api-access-7k2cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:18:06 crc kubenswrapper[4982]: I0224 16:18:06.600846 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k2cj\" (UniqueName: \"kubernetes.io/projected/a0f5edc2-9540-486b-bb4d-c222528670d9-kube-api-access-7k2cj\") on node \"crc\" DevicePath \"\"" Feb 24 16:18:07 crc kubenswrapper[4982]: I0224 16:18:07.042440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532498-6jlt4" event={"ID":"a0f5edc2-9540-486b-bb4d-c222528670d9","Type":"ContainerDied","Data":"edd304eb87c0d262e38c70a5c9e1b8df76628e5c608166a27701d4d3a9655517"} Feb 24 16:18:07 crc kubenswrapper[4982]: I0224 16:18:07.042743 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edd304eb87c0d262e38c70a5c9e1b8df76628e5c608166a27701d4d3a9655517" Feb 24 16:18:07 crc kubenswrapper[4982]: I0224 16:18:07.042645 4982 util.go:48] "No ready sandbox for pod can be found. 
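The job-pod names decode the schedule: the CronJob controller suffixes each Job with its scheduled time in minutes since the Unix epoch, so auto-csr-approver-29532498 corresponds to 16:18:00 UTC, and the neighboring runs seen in this log differ by two minutes. A quick Go check of that convention (the suffix values are copied from the pod names in this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // CronJob jobs are suffixed with the scheduled time in minutes since the epoch.
        for _, suffix := range []int64{29532492, 29532494, 29532498, 29532500} {
            fmt.Println(suffix, "->", time.Unix(suffix*60, 0).UTC())
        }
        // 29532498 -> 2026-02-24 16:18:00 +0000 UTC, matching the SyncLoop ADD above.
    }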
Feb 24 16:18:07 crc kubenswrapper[4982]: I0224 16:18:07.093710 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532492-vjjnz"]
Feb 24 16:18:07 crc kubenswrapper[4982]: I0224 16:18:07.104050 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532492-vjjnz"]
Feb 24 16:18:07 crc kubenswrapper[4982]: I0224 16:18:07.162870 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33f287d-979b-4e95-a874-e2c9160d32db" path="/var/lib/kubelet/pods/f33f287d-979b-4e95-a874-e2c9160d32db/volumes"
Feb 24 16:18:08 crc kubenswrapper[4982]: I0224 16:18:08.738648 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 16:18:08 crc kubenswrapper[4982]: I0224 16:18:08.739031 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 16:18:08 crc kubenswrapper[4982]: I0224 16:18:08.739086 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf"
Feb 24 16:18:08 crc kubenswrapper[4982]: I0224 16:18:08.741271 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 16:18:08 crc kubenswrapper[4982]: I0224 16:18:08.741340 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" gracePeriod=600
Feb 24 16:18:08 crc kubenswrapper[4982]: E0224 16:18:08.888797 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:18:09 crc kubenswrapper[4982]: I0224 16:18:09.080796 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" exitCode=0
Feb 24 16:18:09 crc kubenswrapper[4982]: I0224 16:18:09.080844 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"}
Feb 24 16:18:09 crc kubenswrapper[4982]: I0224 16:18:09.080910 4982 scope.go:117] "RemoveContainer" containerID="1b246296764e97aee07d79224c4eabadd2af08e8871845fff3a298abd7dff07a"
Feb 24 16:18:09 crc kubenswrapper[4982]: I0224 16:18:09.081732 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"
Feb 24 16:18:09 crc kubenswrapper[4982]: E0224 16:18:09.082093 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:18:24 crc kubenswrapper[4982]: I0224 16:18:24.148139 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"
Feb 24 16:18:24 crc kubenswrapper[4982]: E0224 16:18:24.149708 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:18:35 crc kubenswrapper[4982]: I0224 16:18:35.145343 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"
Feb 24 16:18:35 crc kubenswrapper[4982]: E0224 16:18:35.146213 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:18:50 crc kubenswrapper[4982]: I0224 16:18:50.145255 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"
Feb 24 16:18:50 crc kubenswrapper[4982]: E0224 16:18:50.146078 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:18:54 crc kubenswrapper[4982]: I0224 16:18:54.339219 4982 scope.go:117] "RemoveContainer" containerID="4e4fe7ce733ab7c23bfcaf56ddc1ed3539539f14bf5f04ef7bdb87497d198d60"
Feb 24 16:19:04 crc kubenswrapper[4982]: I0224 16:19:04.145783 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"
Feb 24 16:19:04 crc kubenswrapper[4982]: E0224 16:19:04.146764 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:19:17 crc kubenswrapper[4982]: I0224 16:19:17.147685 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"
Feb 24 16:19:17 crc kubenswrapper[4982]: E0224 16:19:17.149138 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:19:30 crc kubenswrapper[4982]: I0224 16:19:30.146532 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"
Feb 24 16:19:30 crc kubenswrapper[4982]: E0224 16:19:30.147464 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.708158 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wtnf2"]
Feb 24 16:19:37 crc kubenswrapper[4982]: E0224 16:19:37.711233 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f5edc2-9540-486b-bb4d-c222528670d9" containerName="oc"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.711272 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f5edc2-9540-486b-bb4d-c222528670d9" containerName="oc"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.712375 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f5edc2-9540-486b-bb4d-c222528670d9" containerName="oc"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.716822 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtnf2"
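The repeating machine-config-daemon lines from 16:18:09 onward are the standard CrashLoopBackOff pattern: the liveness probe failed, the container was killed (gracePeriod=600), and further restarts are throttled. The kubelet's restart backoff roughly doubles from a 10s base up to the 5m cap quoted in the error text, after which the same "Error syncing pod, skipping" line recurs on every sync until a restart sticks. A sketch of that schedule (assuming the kubelet defaults of a 10s base and 5m cap):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Kubelet container restart backoff: start at 10s, double, cap at 5m.
        d := 10 * time.Second
        for i := 0; i < 8; i++ {
            fmt.Print(d, " ")
            if d *= 2; d > 5*time.Minute {
                d = 5 * time.Minute
            }
        }
        fmt.Println() // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s
    }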
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.744765 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtnf2"]
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.767519 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-catalog-content\") pod \"community-operators-wtnf2\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.767560 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-utilities\") pod \"community-operators-wtnf2\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.767592 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9q84\" (UniqueName: \"kubernetes.io/projected/7c50964b-3cde-4a3e-b1f4-44d7824d4716-kube-api-access-l9q84\") pod \"community-operators-wtnf2\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.870125 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-catalog-content\") pod \"community-operators-wtnf2\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.870175 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-utilities\") pod \"community-operators-wtnf2\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.870213 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9q84\" (UniqueName: \"kubernetes.io/projected/7c50964b-3cde-4a3e-b1f4-44d7824d4716-kube-api-access-l9q84\") pod \"community-operators-wtnf2\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.871049 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-utilities\") pod \"community-operators-wtnf2\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.871222 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-catalog-content\") pod \"community-operators-wtnf2\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:37 crc kubenswrapper[4982]: I0224 16:19:37.893509 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9q84\" (UniqueName: \"kubernetes.io/projected/7c50964b-3cde-4a3e-b1f4-44d7824d4716-kube-api-access-l9q84\") pod \"community-operators-wtnf2\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:38 crc kubenswrapper[4982]: I0224 16:19:38.055628 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:38 crc kubenswrapper[4982]: I0224 16:19:38.644091 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtnf2"]
Feb 24 16:19:39 crc kubenswrapper[4982]: I0224 16:19:39.217857 4982 generic.go:334] "Generic (PLEG): container finished" podID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerID="9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1" exitCode=0
Feb 24 16:19:39 crc kubenswrapper[4982]: I0224 16:19:39.217941 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtnf2" event={"ID":"7c50964b-3cde-4a3e-b1f4-44d7824d4716","Type":"ContainerDied","Data":"9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1"}
Feb 24 16:19:39 crc kubenswrapper[4982]: I0224 16:19:39.218223 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtnf2" event={"ID":"7c50964b-3cde-4a3e-b1f4-44d7824d4716","Type":"ContainerStarted","Data":"e3fe3a4520305d789c5e5d3b68be374d67324b88def4ec0d6fe511027618afbe"}
Feb 24 16:19:41 crc kubenswrapper[4982]: I0224 16:19:41.247882 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtnf2" event={"ID":"7c50964b-3cde-4a3e-b1f4-44d7824d4716","Type":"ContainerStarted","Data":"17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64"}
Feb 24 16:19:42 crc kubenswrapper[4982]: I0224 16:19:42.268326 4982 generic.go:334] "Generic (PLEG): container finished" podID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerID="17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64" exitCode=0
Feb 24 16:19:42 crc kubenswrapper[4982]: I0224 16:19:42.268379 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtnf2" event={"ID":"7c50964b-3cde-4a3e-b1f4-44d7824d4716","Type":"ContainerDied","Data":"17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64"}
Feb 24 16:19:43 crc kubenswrapper[4982]: I0224 16:19:43.281002 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtnf2" event={"ID":"7c50964b-3cde-4a3e-b1f4-44d7824d4716","Type":"ContainerStarted","Data":"1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8"}
Feb 24 16:19:43 crc kubenswrapper[4982]: I0224 16:19:43.303992 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wtnf2" podStartSLOduration=2.772958817 podStartE2EDuration="6.303968931s" podCreationTimestamp="2026-02-24 16:19:37 +0000 UTC" firstStartedPulling="2026-02-24 16:19:39.220330305 +0000 UTC m=+5440.839388818" lastFinishedPulling="2026-02-24 16:19:42.751340409 +0000 UTC m=+5444.370398932" observedRunningTime="2026-02-24 16:19:43.296418866 +0000 UTC m=+5444.915477359" watchObservedRunningTime="2026-02-24 16:19:43.303968931 +0000 UTC m=+5444.923027434"
Feb 24 16:19:44 crc kubenswrapper[4982]: I0224 16:19:44.145886 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"
Feb 24 16:19:44 crc kubenswrapper[4982]: E0224 16:19:44.146533 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:19:48 crc kubenswrapper[4982]: I0224 16:19:48.056058 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:48 crc kubenswrapper[4982]: I0224 16:19:48.056587 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:49 crc kubenswrapper[4982]: I0224 16:19:49.117564 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wtnf2" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerName="registry-server" probeResult="failure" output=<
Feb 24 16:19:49 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s
Feb 24 16:19:49 crc kubenswrapper[4982]: >
Feb 24 16:19:55 crc kubenswrapper[4982]: I0224 16:19:55.145236 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304"
Feb 24 16:19:55 crc kubenswrapper[4982]: E0224 16:19:55.146016 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:19:58 crc kubenswrapper[4982]: I0224 16:19:58.281412 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:58 crc kubenswrapper[4982]: I0224 16:19:58.341800 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wtnf2"
Feb 24 16:19:58 crc kubenswrapper[4982]: I0224 16:19:58.523567 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtnf2"]
Feb 24 16:19:59 crc kubenswrapper[4982]: I0224 16:19:59.475109 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wtnf2" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerName="registry-server" containerID="cri-o://1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8" gracePeriod=2
Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.043219 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtnf2"
Need to start a new one" pod="openshift-marketplace/community-operators-wtnf2" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.158841 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532500-v92lg"] Feb 24 16:20:00 crc kubenswrapper[4982]: E0224 16:20:00.159908 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerName="extract-utilities" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.159930 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerName="extract-utilities" Feb 24 16:20:00 crc kubenswrapper[4982]: E0224 16:20:00.159958 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerName="extract-content" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.159966 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerName="extract-content" Feb 24 16:20:00 crc kubenswrapper[4982]: E0224 16:20:00.159978 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerName="registry-server" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.159984 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerName="registry-server" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.160205 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerName="registry-server" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.161083 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532500-v92lg" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.164162 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.164486 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.164692 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.165921 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9q84\" (UniqueName: \"kubernetes.io/projected/7c50964b-3cde-4a3e-b1f4-44d7824d4716-kube-api-access-l9q84\") pod \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.166072 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-utilities\") pod \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.166144 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-catalog-content\") pod \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\" (UID: \"7c50964b-3cde-4a3e-b1f4-44d7824d4716\") " Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.167852 4982 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-utilities" (OuterVolumeSpecName: "utilities") pod "7c50964b-3cde-4a3e-b1f4-44d7824d4716" (UID: "7c50964b-3cde-4a3e-b1f4-44d7824d4716"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.175307 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532500-v92lg"] Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.179661 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c50964b-3cde-4a3e-b1f4-44d7824d4716-kube-api-access-l9q84" (OuterVolumeSpecName: "kube-api-access-l9q84") pod "7c50964b-3cde-4a3e-b1f4-44d7824d4716" (UID: "7c50964b-3cde-4a3e-b1f4-44d7824d4716"). InnerVolumeSpecName "kube-api-access-l9q84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.234534 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c50964b-3cde-4a3e-b1f4-44d7824d4716" (UID: "7c50964b-3cde-4a3e-b1f4-44d7824d4716"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.269543 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkdft\" (UniqueName: \"kubernetes.io/projected/857bf54b-e810-401a-b304-6dae26a64c8a-kube-api-access-dkdft\") pod \"auto-csr-approver-29532500-v92lg\" (UID: \"857bf54b-e810-401a-b304-6dae26a64c8a\") " pod="openshift-infra/auto-csr-approver-29532500-v92lg" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.269764 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9q84\" (UniqueName: \"kubernetes.io/projected/7c50964b-3cde-4a3e-b1f4-44d7824d4716-kube-api-access-l9q84\") on node \"crc\" DevicePath \"\"" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.269777 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.269786 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c50964b-3cde-4a3e-b1f4-44d7824d4716-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.371698 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkdft\" (UniqueName: \"kubernetes.io/projected/857bf54b-e810-401a-b304-6dae26a64c8a-kube-api-access-dkdft\") pod \"auto-csr-approver-29532500-v92lg\" (UID: \"857bf54b-e810-401a-b304-6dae26a64c8a\") " pod="openshift-infra/auto-csr-approver-29532500-v92lg" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.389742 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkdft\" (UniqueName: \"kubernetes.io/projected/857bf54b-e810-401a-b304-6dae26a64c8a-kube-api-access-dkdft\") pod \"auto-csr-approver-29532500-v92lg\" (UID: \"857bf54b-e810-401a-b304-6dae26a64c8a\") " pod="openshift-infra/auto-csr-approver-29532500-v92lg" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.489352 4982 
generic.go:334] "Generic (PLEG): container finished" podID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" containerID="1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8" exitCode=0 Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.489396 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtnf2" event={"ID":"7c50964b-3cde-4a3e-b1f4-44d7824d4716","Type":"ContainerDied","Data":"1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8"} Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.489442 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtnf2" event={"ID":"7c50964b-3cde-4a3e-b1f4-44d7824d4716","Type":"ContainerDied","Data":"e3fe3a4520305d789c5e5d3b68be374d67324b88def4ec0d6fe511027618afbe"} Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.489460 4982 scope.go:117] "RemoveContainer" containerID="1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.490111 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtnf2" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.517889 4982 scope.go:117] "RemoveContainer" containerID="17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.533184 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532500-v92lg" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.539750 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtnf2"] Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.550224 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wtnf2"] Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.565348 4982 scope.go:117] "RemoveContainer" containerID="9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.603057 4982 scope.go:117] "RemoveContainer" containerID="1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8" Feb 24 16:20:00 crc kubenswrapper[4982]: E0224 16:20:00.603749 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8\": container with ID starting with 1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8 not found: ID does not exist" containerID="1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.603791 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8"} err="failed to get container status \"1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8\": rpc error: code = NotFound desc = could not find container \"1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8\": container with ID starting with 1d95d25e245f4fe14cdc231e53ded322419be1efd27ca240cda0fba393877ab8 not found: ID does not exist" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.603817 4982 scope.go:117] "RemoveContainer" containerID="17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64" Feb 24 16:20:00 crc 
kubenswrapper[4982]: E0224 16:20:00.604311 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64\": container with ID starting with 17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64 not found: ID does not exist" containerID="17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.604334 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64"} err="failed to get container status \"17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64\": rpc error: code = NotFound desc = could not find container \"17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64\": container with ID starting with 17ef86f40ef7a7764b77378e0d7976bebaa44fd23bbf69d63c96b7aa68281a64 not found: ID does not exist" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.604350 4982 scope.go:117] "RemoveContainer" containerID="9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1" Feb 24 16:20:00 crc kubenswrapper[4982]: E0224 16:20:00.604768 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1\": container with ID starting with 9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1 not found: ID does not exist" containerID="9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1" Feb 24 16:20:00 crc kubenswrapper[4982]: I0224 16:20:00.604842 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1"} err="failed to get container status \"9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1\": rpc error: code = NotFound desc = could not find container \"9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1\": container with ID starting with 9650ee4954101e86bd67f1cc064668d768e77e40c711db2fda968c0517da04f1 not found: ID does not exist" Feb 24 16:20:01 crc kubenswrapper[4982]: I0224 16:20:01.057791 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532500-v92lg"] Feb 24 16:20:01 crc kubenswrapper[4982]: W0224 16:20:01.062205 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod857bf54b_e810_401a_b304_6dae26a64c8a.slice/crio-76bfa88f7a555420a70f61cdbe3ad3834edcde56f9032cc9014e04998eb1396c WatchSource:0}: Error finding container 76bfa88f7a555420a70f61cdbe3ad3834edcde56f9032cc9014e04998eb1396c: Status 404 returned error can't find the container with id 76bfa88f7a555420a70f61cdbe3ad3834edcde56f9032cc9014e04998eb1396c Feb 24 16:20:01 crc kubenswrapper[4982]: I0224 16:20:01.158081 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c50964b-3cde-4a3e-b1f4-44d7824d4716" path="/var/lib/kubelet/pods/7c50964b-3cde-4a3e-b1f4-44d7824d4716/volumes" Feb 24 16:20:01 crc kubenswrapper[4982]: I0224 16:20:01.503371 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532500-v92lg" 
event={"ID":"857bf54b-e810-401a-b304-6dae26a64c8a","Type":"ContainerStarted","Data":"76bfa88f7a555420a70f61cdbe3ad3834edcde56f9032cc9014e04998eb1396c"} Feb 24 16:20:03 crc kubenswrapper[4982]: I0224 16:20:03.525911 4982 generic.go:334] "Generic (PLEG): container finished" podID="857bf54b-e810-401a-b304-6dae26a64c8a" containerID="ab15444abce6358a1042a6e36fae73eec4adc2fbcf0545f177af5ff05f2ff0d2" exitCode=0 Feb 24 16:20:03 crc kubenswrapper[4982]: I0224 16:20:03.525979 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532500-v92lg" event={"ID":"857bf54b-e810-401a-b304-6dae26a64c8a","Type":"ContainerDied","Data":"ab15444abce6358a1042a6e36fae73eec4adc2fbcf0545f177af5ff05f2ff0d2"} Feb 24 16:20:05 crc kubenswrapper[4982]: I0224 16:20:05.998645 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532500-v92lg" Feb 24 16:20:06 crc kubenswrapper[4982]: I0224 16:20:06.126773 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkdft\" (UniqueName: \"kubernetes.io/projected/857bf54b-e810-401a-b304-6dae26a64c8a-kube-api-access-dkdft\") pod \"857bf54b-e810-401a-b304-6dae26a64c8a\" (UID: \"857bf54b-e810-401a-b304-6dae26a64c8a\") " Feb 24 16:20:06 crc kubenswrapper[4982]: I0224 16:20:06.132028 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857bf54b-e810-401a-b304-6dae26a64c8a-kube-api-access-dkdft" (OuterVolumeSpecName: "kube-api-access-dkdft") pod "857bf54b-e810-401a-b304-6dae26a64c8a" (UID: "857bf54b-e810-401a-b304-6dae26a64c8a"). InnerVolumeSpecName "kube-api-access-dkdft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:20:06 crc kubenswrapper[4982]: I0224 16:20:06.145723 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:20:06 crc kubenswrapper[4982]: E0224 16:20:06.146111 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:20:06 crc kubenswrapper[4982]: I0224 16:20:06.230657 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkdft\" (UniqueName: \"kubernetes.io/projected/857bf54b-e810-401a-b304-6dae26a64c8a-kube-api-access-dkdft\") on node \"crc\" DevicePath \"\"" Feb 24 16:20:06 crc kubenswrapper[4982]: I0224 16:20:06.572290 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532500-v92lg" Feb 24 16:20:06 crc kubenswrapper[4982]: I0224 16:20:06.572345 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532500-v92lg" event={"ID":"857bf54b-e810-401a-b304-6dae26a64c8a","Type":"ContainerDied","Data":"76bfa88f7a555420a70f61cdbe3ad3834edcde56f9032cc9014e04998eb1396c"} Feb 24 16:20:06 crc kubenswrapper[4982]: I0224 16:20:06.573040 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76bfa88f7a555420a70f61cdbe3ad3834edcde56f9032cc9014e04998eb1396c" Feb 24 16:20:07 crc kubenswrapper[4982]: I0224 16:20:07.066224 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532494-rtrrp"] Feb 24 16:20:07 crc kubenswrapper[4982]: I0224 16:20:07.076703 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532494-rtrrp"] Feb 24 16:20:07 crc kubenswrapper[4982]: I0224 16:20:07.157094 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9442a3-4de9-4824-9ad6-1c362bbc7a76" path="/var/lib/kubelet/pods/6f9442a3-4de9-4824-9ad6-1c362bbc7a76/volumes" Feb 24 16:20:17 crc kubenswrapper[4982]: I0224 16:20:17.146593 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:20:17 crc kubenswrapper[4982]: E0224 16:20:17.147537 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:20:29 crc kubenswrapper[4982]: I0224 16:20:29.153232 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:20:29 crc kubenswrapper[4982]: E0224 16:20:29.154268 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:20:44 crc kubenswrapper[4982]: I0224 16:20:44.146025 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:20:44 crc kubenswrapper[4982]: E0224 16:20:44.146921 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:20:54 crc kubenswrapper[4982]: I0224 16:20:54.483174 4982 scope.go:117] "RemoveContainer" containerID="4f9ff9a7c95fc445b74796b640fe73b33c7e51960f1038cd8e044feed7b11786" Feb 24 16:20:56 crc kubenswrapper[4982]: I0224 16:20:56.146112 4982 scope.go:117] "RemoveContainer" 
containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:20:56 crc kubenswrapper[4982]: E0224 16:20:56.147011 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:21:11 crc kubenswrapper[4982]: I0224 16:21:11.147327 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:21:11 crc kubenswrapper[4982]: E0224 16:21:11.148418 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:21:26 crc kubenswrapper[4982]: I0224 16:21:26.145389 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:21:26 crc kubenswrapper[4982]: E0224 16:21:26.146345 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:21:40 crc kubenswrapper[4982]: I0224 16:21:40.145675 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:21:40 crc kubenswrapper[4982]: E0224 16:21:40.146706 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:21:53 crc kubenswrapper[4982]: I0224 16:21:53.145936 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:21:53 crc kubenswrapper[4982]: E0224 16:21:53.146755 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.169257 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532502-vskcc"] Feb 24 16:22:00 crc kubenswrapper[4982]: E0224 16:22:00.170620 4982 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="857bf54b-e810-401a-b304-6dae26a64c8a" containerName="oc" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.170641 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="857bf54b-e810-401a-b304-6dae26a64c8a" containerName="oc" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.170902 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="857bf54b-e810-401a-b304-6dae26a64c8a" containerName="oc" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.171898 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532502-vskcc" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.181138 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.181249 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.181366 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.196588 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532502-vskcc"] Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.287986 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwwh\" (UniqueName: \"kubernetes.io/projected/0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50-kube-api-access-cpwwh\") pod \"auto-csr-approver-29532502-vskcc\" (UID: \"0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50\") " pod="openshift-infra/auto-csr-approver-29532502-vskcc" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.396034 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwwh\" (UniqueName: \"kubernetes.io/projected/0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50-kube-api-access-cpwwh\") pod \"auto-csr-approver-29532502-vskcc\" (UID: \"0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50\") " pod="openshift-infra/auto-csr-approver-29532502-vskcc" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.429528 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwwh\" (UniqueName: \"kubernetes.io/projected/0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50-kube-api-access-cpwwh\") pod \"auto-csr-approver-29532502-vskcc\" (UID: \"0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50\") " pod="openshift-infra/auto-csr-approver-29532502-vskcc" Feb 24 16:22:00 crc kubenswrapper[4982]: I0224 16:22:00.499685 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532502-vskcc" Feb 24 16:22:01 crc kubenswrapper[4982]: I0224 16:22:01.040362 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532502-vskcc"] Feb 24 16:22:02 crc kubenswrapper[4982]: I0224 16:22:02.003717 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532502-vskcc" event={"ID":"0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50","Type":"ContainerStarted","Data":"4fc4706767a3611b7c4649fd3a7ed6fb428ddb3223c4b236220f3e9416f8fd46"} Feb 24 16:22:03 crc kubenswrapper[4982]: I0224 16:22:03.013904 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532502-vskcc" event={"ID":"0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50","Type":"ContainerStarted","Data":"a0b116586568ee2280be5e7efabbedafb1fc9a3cf0c24a295e68299a6899eebd"} Feb 24 16:22:03 crc kubenswrapper[4982]: I0224 16:22:03.045994 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532502-vskcc" podStartSLOduration=1.6080697069999998 podStartE2EDuration="3.045968485s" podCreationTimestamp="2026-02-24 16:22:00 +0000 UTC" firstStartedPulling="2026-02-24 16:22:01.047117184 +0000 UTC m=+5582.666175717" lastFinishedPulling="2026-02-24 16:22:02.485015962 +0000 UTC m=+5584.104074495" observedRunningTime="2026-02-24 16:22:03.026827483 +0000 UTC m=+5584.645885996" watchObservedRunningTime="2026-02-24 16:22:03.045968485 +0000 UTC m=+5584.665026968" Feb 24 16:22:04 crc kubenswrapper[4982]: I0224 16:22:04.059190 4982 generic.go:334] "Generic (PLEG): container finished" podID="0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50" containerID="a0b116586568ee2280be5e7efabbedafb1fc9a3cf0c24a295e68299a6899eebd" exitCode=0 Feb 24 16:22:04 crc kubenswrapper[4982]: I0224 16:22:04.059447 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532502-vskcc" event={"ID":"0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50","Type":"ContainerDied","Data":"a0b116586568ee2280be5e7efabbedafb1fc9a3cf0c24a295e68299a6899eebd"} Feb 24 16:22:05 crc kubenswrapper[4982]: I0224 16:22:05.462165 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532502-vskcc" Feb 24 16:22:05 crc kubenswrapper[4982]: I0224 16:22:05.535270 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpwwh\" (UniqueName: \"kubernetes.io/projected/0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50-kube-api-access-cpwwh\") pod \"0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50\" (UID: \"0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50\") " Feb 24 16:22:05 crc kubenswrapper[4982]: I0224 16:22:05.542849 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50-kube-api-access-cpwwh" (OuterVolumeSpecName: "kube-api-access-cpwwh") pod "0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50" (UID: "0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50"). InnerVolumeSpecName "kube-api-access-cpwwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:22:05 crc kubenswrapper[4982]: I0224 16:22:05.638326 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpwwh\" (UniqueName: \"kubernetes.io/projected/0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50-kube-api-access-cpwwh\") on node \"crc\" DevicePath \"\"" Feb 24 16:22:06 crc kubenswrapper[4982]: I0224 16:22:06.104129 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532502-vskcc" event={"ID":"0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50","Type":"ContainerDied","Data":"4fc4706767a3611b7c4649fd3a7ed6fb428ddb3223c4b236220f3e9416f8fd46"} Feb 24 16:22:06 crc kubenswrapper[4982]: I0224 16:22:06.104178 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fc4706767a3611b7c4649fd3a7ed6fb428ddb3223c4b236220f3e9416f8fd46" Feb 24 16:22:06 crc kubenswrapper[4982]: I0224 16:22:06.104275 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532502-vskcc" Feb 24 16:22:06 crc kubenswrapper[4982]: I0224 16:22:06.127724 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532496-vtvmq"] Feb 24 16:22:06 crc kubenswrapper[4982]: I0224 16:22:06.146415 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:22:06 crc kubenswrapper[4982]: E0224 16:22:06.147081 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:22:06 crc kubenswrapper[4982]: I0224 16:22:06.149240 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532496-vtvmq"] Feb 24 16:22:07 crc kubenswrapper[4982]: I0224 16:22:07.164296 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55db493-3b30-4804-a3e6-b1cfbc768196" path="/var/lib/kubelet/pods/b55db493-3b30-4804-a3e6-b1cfbc768196/volumes" Feb 24 16:22:20 crc kubenswrapper[4982]: I0224 16:22:20.145621 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:22:20 crc kubenswrapper[4982]: E0224 16:22:20.146254 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:22:33 crc kubenswrapper[4982]: I0224 16:22:33.147170 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:22:33 crc kubenswrapper[4982]: E0224 16:22:33.150413 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:22:44 crc kubenswrapper[4982]: I0224 16:22:44.146540 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:22:44 crc kubenswrapper[4982]: E0224 16:22:44.147290 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:22:54 crc kubenswrapper[4982]: I0224 16:22:54.615599 4982 scope.go:117] "RemoveContainer" containerID="53f2c29dc4a5476b9a61f83a81488af505221616141440c6da6f16813c5b1e8b" Feb 24 16:22:59 crc kubenswrapper[4982]: I0224 16:22:59.145398 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:22:59 crc kubenswrapper[4982]: E0224 16:22:59.146355 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:23:10 crc kubenswrapper[4982]: I0224 16:23:10.878085 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8g2fn"] Feb 24 16:23:10 crc kubenswrapper[4982]: E0224 16:23:10.880336 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50" containerName="oc" Feb 24 16:23:10 crc kubenswrapper[4982]: I0224 16:23:10.880800 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50" containerName="oc" Feb 24 16:23:10 crc kubenswrapper[4982]: I0224 16:23:10.881596 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50" containerName="oc" Feb 24 16:23:10 crc kubenswrapper[4982]: I0224 16:23:10.884854 4982 util.go:30] "No sandbox for pod can be found. 
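
Note on the machine-config-daemon entries repeating between 16:20 and 16:23: they are the visible half of kubelet's crash-loop handling. Each sync the pod worker asks to restart the container, and pod_workers.go refuses while the back-off window is open. The container back-off doubles per crash up to the 5m0s cap quoted in the message (10s initial and a 300s cap are the long-standing kubelet defaults, though the log alone only shows the cap); once the window expires, the 16:23:12 RemoveContainer goes through without an error and a new container (3a897f4a...) starts at 16:23:13. The schedule is just capped doubling; a sketch under those assumed defaults:

    // backoff.go: capped-doubling restart schedule behind CrashLoopBackOff,
    // assuming the conventional kubelet defaults (10s initial, 5m cap).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initial  = 10 * time.Second
            maxDelay = 5 * time.Minute // "back-off 5m0s restarting failed container=..."
        )
        delay := initial
        for crash := 1; crash <= 8; crash++ {
            fmt.Printf("after crash %d: wait %s\n", crash, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
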
Need to start a new one" pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:10 crc kubenswrapper[4982]: I0224 16:23:10.893548 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g2fn"] Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.078731 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-utilities\") pod \"certified-operators-8g2fn\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.080076 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-catalog-content\") pod \"certified-operators-8g2fn\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.080412 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8kvn\" (UniqueName: \"kubernetes.io/projected/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-kube-api-access-v8kvn\") pod \"certified-operators-8g2fn\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.182613 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-catalog-content\") pod \"certified-operators-8g2fn\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.182820 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8kvn\" (UniqueName: \"kubernetes.io/projected/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-kube-api-access-v8kvn\") pod \"certified-operators-8g2fn\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.183306 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-utilities\") pod \"certified-operators-8g2fn\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.183580 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-catalog-content\") pod \"certified-operators-8g2fn\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.183883 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-utilities\") pod \"certified-operators-8g2fn\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.364783 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v8kvn\" (UniqueName: \"kubernetes.io/projected/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-kube-api-access-v8kvn\") pod \"certified-operators-8g2fn\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:11 crc kubenswrapper[4982]: I0224 16:23:11.521831 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:12 crc kubenswrapper[4982]: I0224 16:23:12.065733 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g2fn"] Feb 24 16:23:12 crc kubenswrapper[4982]: I0224 16:23:12.147174 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:23:13 crc kubenswrapper[4982]: I0224 16:23:13.003013 4982 generic.go:334] "Generic (PLEG): container finished" podID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerID="4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d" exitCode=0 Feb 24 16:23:13 crc kubenswrapper[4982]: I0224 16:23:13.003086 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2fn" event={"ID":"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095","Type":"ContainerDied","Data":"4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d"} Feb 24 16:23:13 crc kubenswrapper[4982]: I0224 16:23:13.003584 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2fn" event={"ID":"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095","Type":"ContainerStarted","Data":"9398a6d95430938b75df0ee9380ccc3111efc4e9782e545b05e97800712b092d"} Feb 24 16:23:13 crc kubenswrapper[4982]: I0224 16:23:13.007041 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 16:23:13 crc kubenswrapper[4982]: I0224 16:23:13.009675 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"3a897f4a77a414cc436dfc13cb27f56aa83c877cbc34f4775483201dc8bc5633"} Feb 24 16:23:14 crc kubenswrapper[4982]: I0224 16:23:14.022427 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2fn" event={"ID":"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095","Type":"ContainerStarted","Data":"359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037"} Feb 24 16:23:16 crc kubenswrapper[4982]: I0224 16:23:16.066744 4982 generic.go:334] "Generic (PLEG): container finished" podID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerID="359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037" exitCode=0 Feb 24 16:23:16 crc kubenswrapper[4982]: I0224 16:23:16.066881 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2fn" event={"ID":"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095","Type":"ContainerDied","Data":"359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037"} Feb 24 16:23:17 crc kubenswrapper[4982]: I0224 16:23:17.091359 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2fn" event={"ID":"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095","Type":"ContainerStarted","Data":"cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09"} Feb 24 16:23:17 crc kubenswrapper[4982]: I0224 16:23:17.129334 4982 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8g2fn" podStartSLOduration=3.676638604 podStartE2EDuration="7.129301146s" podCreationTimestamp="2026-02-24 16:23:10 +0000 UTC" firstStartedPulling="2026-02-24 16:23:13.006733391 +0000 UTC m=+5654.625791894" lastFinishedPulling="2026-02-24 16:23:16.459395913 +0000 UTC m=+5658.078454436" observedRunningTime="2026-02-24 16:23:17.127693313 +0000 UTC m=+5658.746751836" watchObservedRunningTime="2026-02-24 16:23:17.129301146 +0000 UTC m=+5658.748359669" Feb 24 16:23:21 crc kubenswrapper[4982]: I0224 16:23:21.521990 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:21 crc kubenswrapper[4982]: I0224 16:23:21.522946 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:22 crc kubenswrapper[4982]: I0224 16:23:22.592253 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8g2fn" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerName="registry-server" probeResult="failure" output=< Feb 24 16:23:22 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:23:22 crc kubenswrapper[4982]: > Feb 24 16:23:31 crc kubenswrapper[4982]: I0224 16:23:31.598178 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:31 crc kubenswrapper[4982]: I0224 16:23:31.669228 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:31 crc kubenswrapper[4982]: I0224 16:23:31.844625 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g2fn"] Feb 24 16:23:33 crc kubenswrapper[4982]: I0224 16:23:33.319350 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8g2fn" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerName="registry-server" containerID="cri-o://cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09" gracePeriod=2 Feb 24 16:23:33 crc kubenswrapper[4982]: I0224 16:23:33.954101 4982 util.go:48] "No ready sandbox for pod can be found. 
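
Note on the startup-probe failure above: the output "timeout: failed to connect service \":50051\" within 1s" is characteristic of the grpc_health_probe-style exec check that catalog registry pods run against the registry-server port (that attribution is an inference from the message format, not stated in the log). The first attempt at 16:23:22 times out while the server is still loading catalog content; by 16:23:31 the startup probe reports "started" and readiness follows. The connect step can be approximated with a plain TCP dial under the same 1s budget (this mimics only the connection timeout, not the full gRPC health RPC):

    // probecheck.go: approximate the connect step of the failing startup probe
    // with a 1-second-budget TCP dial against the registry port.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        addr := "localhost:50051" // assumption: dialing the pod's registry port locally
        conn, err := net.DialTimeout("tcp", addr, 1*time.Second)
        if err != nil {
            fmt.Printf("failed to connect service %q within 1s: %v\n", addr, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("connected: server is at least accepting connections")
    }
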
Need to start a new one" pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.050742 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-utilities\") pod \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.051433 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8kvn\" (UniqueName: \"kubernetes.io/projected/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-kube-api-access-v8kvn\") pod \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.051597 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-catalog-content\") pod \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\" (UID: \"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095\") " Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.051884 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-utilities" (OuterVolumeSpecName: "utilities") pod "8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" (UID: "8ac6d0f5-34d6-4520-9cc8-c5680b9a6095"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.053158 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.061360 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-kube-api-access-v8kvn" (OuterVolumeSpecName: "kube-api-access-v8kvn") pod "8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" (UID: "8ac6d0f5-34d6-4520-9cc8-c5680b9a6095"). InnerVolumeSpecName "kube-api-access-v8kvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.120793 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" (UID: "8ac6d0f5-34d6-4520-9cc8-c5680b9a6095"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.156296 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8kvn\" (UniqueName: \"kubernetes.io/projected/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-kube-api-access-v8kvn\") on node \"crc\" DevicePath \"\"" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.156338 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.334132 4982 generic.go:334] "Generic (PLEG): container finished" podID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerID="cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09" exitCode=0 Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.334169 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2fn" event={"ID":"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095","Type":"ContainerDied","Data":"cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09"} Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.334205 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2fn" event={"ID":"8ac6d0f5-34d6-4520-9cc8-c5680b9a6095","Type":"ContainerDied","Data":"9398a6d95430938b75df0ee9380ccc3111efc4e9782e545b05e97800712b092d"} Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.334227 4982 scope.go:117] "RemoveContainer" containerID="cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.334268 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g2fn" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.405709 4982 scope.go:117] "RemoveContainer" containerID="359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.410156 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g2fn"] Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.432852 4982 scope.go:117] "RemoveContainer" containerID="4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.435484 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8g2fn"] Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.505126 4982 scope.go:117] "RemoveContainer" containerID="cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09" Feb 24 16:23:34 crc kubenswrapper[4982]: E0224 16:23:34.505810 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09\": container with ID starting with cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09 not found: ID does not exist" containerID="cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.505883 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09"} err="failed to get container status \"cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09\": rpc error: code = NotFound desc = could not find container \"cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09\": container with ID starting with cfd1ca2bb35e390a776f7ac570a87b30a6ca08c27b7edef7e4f846819b439e09 not found: ID does not exist" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.505922 4982 scope.go:117] "RemoveContainer" containerID="359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037" Feb 24 16:23:34 crc kubenswrapper[4982]: E0224 16:23:34.506359 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037\": container with ID starting with 359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037 not found: ID does not exist" containerID="359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.506398 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037"} err="failed to get container status \"359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037\": rpc error: code = NotFound desc = could not find container \"359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037\": container with ID starting with 359ef72db7ab66931c31bb432000a4636ca680324b0730c536f45157cd8a7037 not found: ID does not exist" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.506426 4982 scope.go:117] "RemoveContainer" containerID="4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d" Feb 24 16:23:34 crc kubenswrapper[4982]: E0224 16:23:34.506797 4982 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d\": container with ID starting with 4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d not found: ID does not exist" containerID="4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d" Feb 24 16:23:34 crc kubenswrapper[4982]: I0224 16:23:34.506853 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d"} err="failed to get container status \"4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d\": rpc error: code = NotFound desc = could not find container \"4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d\": container with ID starting with 4c8b65d29da2799e62c9798faffc8548f50b958b6e54fc8a907a1a749707aa9d not found: ID does not exist" Feb 24 16:23:35 crc kubenswrapper[4982]: I0224 16:23:35.160942 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" path="/var/lib/kubelet/pods/8ac6d0f5-34d6-4520-9cc8-c5680b9a6095/volumes" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.174068 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532504-xrvjc"] Feb 24 16:24:00 crc kubenswrapper[4982]: E0224 16:24:00.176227 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerName="registry-server" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.176268 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerName="registry-server" Feb 24 16:24:00 crc kubenswrapper[4982]: E0224 16:24:00.176325 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerName="extract-utilities" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.176338 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerName="extract-utilities" Feb 24 16:24:00 crc kubenswrapper[4982]: E0224 16:24:00.176397 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerName="extract-content" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.176419 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerName="extract-content" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.176903 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac6d0f5-34d6-4520-9cc8-c5680b9a6095" containerName="registry-server" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.178763 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532504-xrvjc" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.182173 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.182672 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.184112 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.193960 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532504-xrvjc"] Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.248134 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbj6\" (UniqueName: \"kubernetes.io/projected/1033e866-9d20-40b0-aad6-f651e6ae99a4-kube-api-access-kpbj6\") pod \"auto-csr-approver-29532504-xrvjc\" (UID: \"1033e866-9d20-40b0-aad6-f651e6ae99a4\") " pod="openshift-infra/auto-csr-approver-29532504-xrvjc" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.351619 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbj6\" (UniqueName: \"kubernetes.io/projected/1033e866-9d20-40b0-aad6-f651e6ae99a4-kube-api-access-kpbj6\") pod \"auto-csr-approver-29532504-xrvjc\" (UID: \"1033e866-9d20-40b0-aad6-f651e6ae99a4\") " pod="openshift-infra/auto-csr-approver-29532504-xrvjc" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.378094 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpbj6\" (UniqueName: \"kubernetes.io/projected/1033e866-9d20-40b0-aad6-f651e6ae99a4-kube-api-access-kpbj6\") pod \"auto-csr-approver-29532504-xrvjc\" (UID: \"1033e866-9d20-40b0-aad6-f651e6ae99a4\") " pod="openshift-infra/auto-csr-approver-29532504-xrvjc" Feb 24 16:24:00 crc kubenswrapper[4982]: I0224 16:24:00.505902 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532504-xrvjc" Feb 24 16:24:01 crc kubenswrapper[4982]: I0224 16:24:01.089607 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532504-xrvjc"] Feb 24 16:24:01 crc kubenswrapper[4982]: I0224 16:24:01.732028 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532504-xrvjc" event={"ID":"1033e866-9d20-40b0-aad6-f651e6ae99a4","Type":"ContainerStarted","Data":"e290a2d80c052ecd941d4f497ca027ed2938ed4f2585bf09b0673b6082b4bd93"} Feb 24 16:24:02 crc kubenswrapper[4982]: I0224 16:24:02.754062 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532504-xrvjc" event={"ID":"1033e866-9d20-40b0-aad6-f651e6ae99a4","Type":"ContainerStarted","Data":"512a58df20830cc22cc2362c39e5bb833a9db65c151383ec9bef3340fac39a4d"} Feb 24 16:24:02 crc kubenswrapper[4982]: I0224 16:24:02.782381 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532504-xrvjc" podStartSLOduration=1.6802023130000001 podStartE2EDuration="2.78235274s" podCreationTimestamp="2026-02-24 16:24:00 +0000 UTC" firstStartedPulling="2026-02-24 16:24:01.111297034 +0000 UTC m=+5702.730355537" lastFinishedPulling="2026-02-24 16:24:02.213447441 +0000 UTC m=+5703.832505964" observedRunningTime="2026-02-24 16:24:02.772106591 +0000 UTC m=+5704.391165094" watchObservedRunningTime="2026-02-24 16:24:02.78235274 +0000 UTC m=+5704.401411233" Feb 24 16:24:03 crc kubenswrapper[4982]: I0224 16:24:03.769234 4982 generic.go:334] "Generic (PLEG): container finished" podID="1033e866-9d20-40b0-aad6-f651e6ae99a4" containerID="512a58df20830cc22cc2362c39e5bb833a9db65c151383ec9bef3340fac39a4d" exitCode=0 Feb 24 16:24:03 crc kubenswrapper[4982]: I0224 16:24:03.769367 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532504-xrvjc" event={"ID":"1033e866-9d20-40b0-aad6-f651e6ae99a4","Type":"ContainerDied","Data":"512a58df20830cc22cc2362c39e5bb833a9db65c151383ec9bef3340fac39a4d"} Feb 24 16:24:05 crc kubenswrapper[4982]: I0224 16:24:05.288017 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532504-xrvjc" Feb 24 16:24:05 crc kubenswrapper[4982]: I0224 16:24:05.402412 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpbj6\" (UniqueName: \"kubernetes.io/projected/1033e866-9d20-40b0-aad6-f651e6ae99a4-kube-api-access-kpbj6\") pod \"1033e866-9d20-40b0-aad6-f651e6ae99a4\" (UID: \"1033e866-9d20-40b0-aad6-f651e6ae99a4\") " Feb 24 16:24:05 crc kubenswrapper[4982]: I0224 16:24:05.410901 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1033e866-9d20-40b0-aad6-f651e6ae99a4-kube-api-access-kpbj6" (OuterVolumeSpecName: "kube-api-access-kpbj6") pod "1033e866-9d20-40b0-aad6-f651e6ae99a4" (UID: "1033e866-9d20-40b0-aad6-f651e6ae99a4"). InnerVolumeSpecName "kube-api-access-kpbj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:24:05 crc kubenswrapper[4982]: I0224 16:24:05.505937 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpbj6\" (UniqueName: \"kubernetes.io/projected/1033e866-9d20-40b0-aad6-f651e6ae99a4-kube-api-access-kpbj6\") on node \"crc\" DevicePath \"\"" Feb 24 16:24:05 crc kubenswrapper[4982]: I0224 16:24:05.800490 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532504-xrvjc" event={"ID":"1033e866-9d20-40b0-aad6-f651e6ae99a4","Type":"ContainerDied","Data":"e290a2d80c052ecd941d4f497ca027ed2938ed4f2585bf09b0673b6082b4bd93"} Feb 24 16:24:05 crc kubenswrapper[4982]: I0224 16:24:05.800558 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e290a2d80c052ecd941d4f497ca027ed2938ed4f2585bf09b0673b6082b4bd93" Feb 24 16:24:05 crc kubenswrapper[4982]: I0224 16:24:05.800556 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532504-xrvjc" Feb 24 16:24:05 crc kubenswrapper[4982]: I0224 16:24:05.867709 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532498-6jlt4"] Feb 24 16:24:05 crc kubenswrapper[4982]: I0224 16:24:05.877601 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532498-6jlt4"] Feb 24 16:24:07 crc kubenswrapper[4982]: I0224 16:24:07.167208 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0f5edc2-9540-486b-bb4d-c222528670d9" path="/var/lib/kubelet/pods/a0f5edc2-9540-486b-bb4d-c222528670d9/volumes" Feb 24 16:24:54 crc kubenswrapper[4982]: I0224 16:24:54.838819 4982 scope.go:117] "RemoveContainer" containerID="4f783b682c642b679331d6128028c6d0fa3e295b411d7c6e422b9b69f531faf8" Feb 24 16:25:09 crc kubenswrapper[4982]: I0224 16:25:09.675323 4982 generic.go:334] "Generic (PLEG): container finished" podID="2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" containerID="03214d5971da6d75c5c45446b2af41004669e5c071695658a55ccdac72250a97" exitCode=0 Feb 24 16:25:09 crc kubenswrapper[4982]: I0224 16:25:09.675453 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9","Type":"ContainerDied","Data":"03214d5971da6d75c5c45446b2af41004669e5c071695658a55ccdac72250a97"} Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.082149 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.186226 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config-secret\") pod \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.186531 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbgh8\" (UniqueName: \"kubernetes.io/projected/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-kube-api-access-zbgh8\") pod \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.186568 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config\") pod \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.186627 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-config-data\") pod \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.186653 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ssh-key\") pod \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.186753 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-temporary\") pod \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.186860 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-workdir\") pod \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.186882 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.187013 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ca-certs\") pod \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\" (UID: \"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9\") " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.187491 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" (UID: "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.188032 4982 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.188389 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-config-data" (OuterVolumeSpecName: "config-data") pod "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" (UID: "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.194842 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" (UID: "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.196265 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" (UID: "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.204668 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-kube-api-access-zbgh8" (OuterVolumeSpecName: "kube-api-access-zbgh8") pod "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" (UID: "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9"). InnerVolumeSpecName "kube-api-access-zbgh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.219850 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" (UID: "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.221047 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" (UID: "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.222420 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" (UID: "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.243850 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" (UID: "2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.289899 4982 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.290579 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.290596 4982 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.290607 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.290616 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbgh8\" (UniqueName: \"kubernetes.io/projected/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-kube-api-access-zbgh8\") on node \"crc\" DevicePath \"\"" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.290625 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.290633 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.290641 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.319358 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.392141 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 
24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.700056 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9","Type":"ContainerDied","Data":"8fd1b058558cce835df51c3f72fc22c11ec4a66cb38c9aa85cd1282912b0b78c"} Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.700132 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fd1b058558cce835df51c3f72fc22c11ec4a66cb38c9aa85cd1282912b0b78c" Feb 24 16:25:11 crc kubenswrapper[4982]: I0224 16:25:11.700242 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.303055 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 24 16:25:15 crc kubenswrapper[4982]: E0224 16:25:15.304110 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1033e866-9d20-40b0-aad6-f651e6ae99a4" containerName="oc" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.304123 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1033e866-9d20-40b0-aad6-f651e6ae99a4" containerName="oc" Feb 24 16:25:15 crc kubenswrapper[4982]: E0224 16:25:15.304160 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" containerName="tempest-tests-tempest-tests-runner" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.304166 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" containerName="tempest-tests-tempest-tests-runner" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.304391 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1033e866-9d20-40b0-aad6-f651e6ae99a4" containerName="oc" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.304415 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9" containerName="tempest-tests-tempest-tests-runner" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.305173 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.311537 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fh5tb" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.327346 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.411753 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5z8j\" (UniqueName: \"kubernetes.io/projected/92ce340a-a0e5-4ab6-984a-17b5b02bfa0a-kube-api-access-c5z8j\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"92ce340a-a0e5-4ab6-984a-17b5b02bfa0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.411951 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"92ce340a-a0e5-4ab6-984a-17b5b02bfa0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.514104 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5z8j\" (UniqueName: \"kubernetes.io/projected/92ce340a-a0e5-4ab6-984a-17b5b02bfa0a-kube-api-access-c5z8j\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"92ce340a-a0e5-4ab6-984a-17b5b02bfa0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.514235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"92ce340a-a0e5-4ab6-984a-17b5b02bfa0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.516467 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"92ce340a-a0e5-4ab6-984a-17b5b02bfa0a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.535020 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5z8j\" (UniqueName: \"kubernetes.io/projected/92ce340a-a0e5-4ab6-984a-17b5b02bfa0a-kube-api-access-c5z8j\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"92ce340a-a0e5-4ab6-984a-17b5b02bfa0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.577344 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"92ce340a-a0e5-4ab6-984a-17b5b02bfa0a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 24 16:25:15 crc 
Feb 24 16:25:15 crc kubenswrapper[4982]: I0224 16:25:15.650702 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 24 16:25:16 crc kubenswrapper[4982]: I0224 16:25:16.149577 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 24 16:25:16 crc kubenswrapper[4982]: I0224 16:25:16.763003 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"92ce340a-a0e5-4ab6-984a-17b5b02bfa0a","Type":"ContainerStarted","Data":"2fb6a8ca80a177278ad5623772b0e02da65dbdc7ef6b0601ae9e9e3c34d169a3"}
Feb 24 16:25:17 crc kubenswrapper[4982]: I0224 16:25:17.777547 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"92ce340a-a0e5-4ab6-984a-17b5b02bfa0a","Type":"ContainerStarted","Data":"f07892e6d13bd1f9338a37e4cde32e595a4e65fbe467ad754b6a750b7b338f9d"}
Feb 24 16:25:38 crc kubenswrapper[4982]: I0224 16:25:38.738256 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 16:25:38 crc kubenswrapper[4982]: I0224 16:25:38.738987 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.573986 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=29.43065126 podStartE2EDuration="30.573961748s" podCreationTimestamp="2026-02-24 16:25:15 +0000 UTC" firstStartedPulling="2026-02-24 16:25:16.152943922 +0000 UTC m=+5777.772002425" lastFinishedPulling="2026-02-24 16:25:17.29625438 +0000 UTC m=+5778.915312913" observedRunningTime="2026-02-24 16:25:17.792706343 +0000 UTC m=+5779.411764856" watchObservedRunningTime="2026-02-24 16:25:45.573961748 +0000 UTC m=+5807.193020241"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.582432 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nsv2s/must-gather-7n947"]
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.584547 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.586839 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nsv2s"/"kube-root-ca.crt"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.586877 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nsv2s"/"default-dockercfg-zn6c7"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.586951 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nsv2s"/"openshift-service-ca.crt"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.594231 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nsv2s/must-gather-7n947"]
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.720856 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84079553-029b-4043-a50c-b7ddd543dadb-must-gather-output\") pod \"must-gather-7n947\" (UID: \"84079553-029b-4043-a50c-b7ddd543dadb\") " pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.721445 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97pz\" (UniqueName: \"kubernetes.io/projected/84079553-029b-4043-a50c-b7ddd543dadb-kube-api-access-z97pz\") pod \"must-gather-7n947\" (UID: \"84079553-029b-4043-a50c-b7ddd543dadb\") " pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.823743 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84079553-029b-4043-a50c-b7ddd543dadb-must-gather-output\") pod \"must-gather-7n947\" (UID: \"84079553-029b-4043-a50c-b7ddd543dadb\") " pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.823892 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97pz\" (UniqueName: \"kubernetes.io/projected/84079553-029b-4043-a50c-b7ddd543dadb-kube-api-access-z97pz\") pod \"must-gather-7n947\" (UID: \"84079553-029b-4043-a50c-b7ddd543dadb\") " pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:25:45 crc kubenswrapper[4982]: I0224 16:25:45.824165 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84079553-029b-4043-a50c-b7ddd543dadb-must-gather-output\") pod \"must-gather-7n947\" (UID: \"84079553-029b-4043-a50c-b7ddd543dadb\") " pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:25:46 crc kubenswrapper[4982]: I0224 16:25:46.668454 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97pz\" (UniqueName: \"kubernetes.io/projected/84079553-029b-4043-a50c-b7ddd543dadb-kube-api-access-z97pz\") pod \"must-gather-7n947\" (UID: \"84079553-029b-4043-a50c-b7ddd543dadb\") " pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:25:46 crc kubenswrapper[4982]: I0224 16:25:46.804666 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:25:47 crc kubenswrapper[4982]: I0224 16:25:47.314119 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nsv2s/must-gather-7n947"]
Feb 24 16:25:48 crc kubenswrapper[4982]: I0224 16:25:48.226622 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/must-gather-7n947" event={"ID":"84079553-029b-4043-a50c-b7ddd543dadb","Type":"ContainerStarted","Data":"948905b1e1ade634a0940ca5bd736e5146d414dca6821c9da9a5cbbc3791a481"}
Feb 24 16:25:55 crc kubenswrapper[4982]: I0224 16:25:55.312188 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/must-gather-7n947" event={"ID":"84079553-029b-4043-a50c-b7ddd543dadb","Type":"ContainerStarted","Data":"1fe256fa2716cf1b19b676369e86a29c69e9e687402238ce67ca03e0a3e4040e"}
Feb 24 16:25:55 crc kubenswrapper[4982]: I0224 16:25:55.312785 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/must-gather-7n947" event={"ID":"84079553-029b-4043-a50c-b7ddd543dadb","Type":"ContainerStarted","Data":"23920e794895d9d49785bee59bb4875d0f0ef05c9e5d420d706a5e403b94d9e2"}
Feb 24 16:25:55 crc kubenswrapper[4982]: I0224 16:25:55.329844 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nsv2s/must-gather-7n947" podStartSLOduration=3.062889638 podStartE2EDuration="10.329822739s" podCreationTimestamp="2026-02-24 16:25:45 +0000 UTC" firstStartedPulling="2026-02-24 16:25:47.321157018 +0000 UTC m=+5808.940215511" lastFinishedPulling="2026-02-24 16:25:54.588090089 +0000 UTC m=+5816.207148612" observedRunningTime="2026-02-24 16:25:55.3250984 +0000 UTC m=+5816.944156903" watchObservedRunningTime="2026-02-24 16:25:55.329822739 +0000 UTC m=+5816.948881242"
Feb 24 16:26:00 crc kubenswrapper[4982]: I0224 16:26:00.152623 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532506-xzf46"]
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532506-xzf46" Feb 24 16:26:00 crc kubenswrapper[4982]: I0224 16:26:00.156988 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:26:00 crc kubenswrapper[4982]: I0224 16:26:00.157348 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:26:00 crc kubenswrapper[4982]: I0224 16:26:00.158412 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:26:00 crc kubenswrapper[4982]: I0224 16:26:00.179457 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532506-xzf46"] Feb 24 16:26:00 crc kubenswrapper[4982]: I0224 16:26:00.217522 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z89p\" (UniqueName: \"kubernetes.io/projected/82beadda-27e6-497a-8271-54e32586942c-kube-api-access-5z89p\") pod \"auto-csr-approver-29532506-xzf46\" (UID: \"82beadda-27e6-497a-8271-54e32586942c\") " pod="openshift-infra/auto-csr-approver-29532506-xzf46" Feb 24 16:26:00 crc kubenswrapper[4982]: I0224 16:26:00.319580 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z89p\" (UniqueName: \"kubernetes.io/projected/82beadda-27e6-497a-8271-54e32586942c-kube-api-access-5z89p\") pod \"auto-csr-approver-29532506-xzf46\" (UID: \"82beadda-27e6-497a-8271-54e32586942c\") " pod="openshift-infra/auto-csr-approver-29532506-xzf46" Feb 24 16:26:00 crc kubenswrapper[4982]: I0224 16:26:00.346799 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z89p\" (UniqueName: \"kubernetes.io/projected/82beadda-27e6-497a-8271-54e32586942c-kube-api-access-5z89p\") pod \"auto-csr-approver-29532506-xzf46\" (UID: \"82beadda-27e6-497a-8271-54e32586942c\") " pod="openshift-infra/auto-csr-approver-29532506-xzf46" Feb 24 16:26:00 crc kubenswrapper[4982]: I0224 16:26:00.480283 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532506-xzf46" Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.171689 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532506-xzf46"] Feb 24 16:26:01 crc kubenswrapper[4982]: W0224 16:26:01.268261 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82beadda_27e6_497a_8271_54e32586942c.slice/crio-2c36ccc99b216a681e1a942dd7ef2802e10afede2fd13ed58f1b00e4a336224e WatchSource:0}: Error finding container 2c36ccc99b216a681e1a942dd7ef2802e10afede2fd13ed58f1b00e4a336224e: Status 404 returned error can't find the container with id 2c36ccc99b216a681e1a942dd7ef2802e10afede2fd13ed58f1b00e4a336224e Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.389043 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532506-xzf46" event={"ID":"82beadda-27e6-497a-8271-54e32586942c","Type":"ContainerStarted","Data":"2c36ccc99b216a681e1a942dd7ef2802e10afede2fd13ed58f1b00e4a336224e"} Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.495550 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nsv2s/crc-debug-bx6bh"] Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.496939 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.648259 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63aba6d9-f06b-4377-95dd-549447ea3ae6-host\") pod \"crc-debug-bx6bh\" (UID: \"63aba6d9-f06b-4377-95dd-549447ea3ae6\") " pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.648570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qbqn\" (UniqueName: \"kubernetes.io/projected/63aba6d9-f06b-4377-95dd-549447ea3ae6-kube-api-access-8qbqn\") pod \"crc-debug-bx6bh\" (UID: \"63aba6d9-f06b-4377-95dd-549447ea3ae6\") " pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.750849 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63aba6d9-f06b-4377-95dd-549447ea3ae6-host\") pod \"crc-debug-bx6bh\" (UID: \"63aba6d9-f06b-4377-95dd-549447ea3ae6\") " pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.750920 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qbqn\" (UniqueName: \"kubernetes.io/projected/63aba6d9-f06b-4377-95dd-549447ea3ae6-kube-api-access-8qbqn\") pod \"crc-debug-bx6bh\" (UID: \"63aba6d9-f06b-4377-95dd-549447ea3ae6\") " pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.752040 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63aba6d9-f06b-4377-95dd-549447ea3ae6-host\") pod \"crc-debug-bx6bh\" (UID: \"63aba6d9-f06b-4377-95dd-549447ea3ae6\") " pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.779326 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qbqn\" (UniqueName: 
\"kubernetes.io/projected/63aba6d9-f06b-4377-95dd-549447ea3ae6-kube-api-access-8qbqn\") pod \"crc-debug-bx6bh\" (UID: \"63aba6d9-f06b-4377-95dd-549447ea3ae6\") " pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:26:01 crc kubenswrapper[4982]: I0224 16:26:01.814329 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:26:01 crc kubenswrapper[4982]: W0224 16:26:01.861437 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63aba6d9_f06b_4377_95dd_549447ea3ae6.slice/crio-99e551b035bcfe46eae0e5fcc4ec715c6a2b891607b3ebe609860ef007236ee0 WatchSource:0}: Error finding container 99e551b035bcfe46eae0e5fcc4ec715c6a2b891607b3ebe609860ef007236ee0: Status 404 returned error can't find the container with id 99e551b035bcfe46eae0e5fcc4ec715c6a2b891607b3ebe609860ef007236ee0 Feb 24 16:26:02 crc kubenswrapper[4982]: I0224 16:26:02.400224 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" event={"ID":"63aba6d9-f06b-4377-95dd-549447ea3ae6","Type":"ContainerStarted","Data":"99e551b035bcfe46eae0e5fcc4ec715c6a2b891607b3ebe609860ef007236ee0"} Feb 24 16:26:03 crc kubenswrapper[4982]: I0224 16:26:03.411562 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532506-xzf46" event={"ID":"82beadda-27e6-497a-8271-54e32586942c","Type":"ContainerStarted","Data":"448942b858e9325697d1d3c7a22ef51a96a486af67db959b9e95a723446a0d3a"} Feb 24 16:26:03 crc kubenswrapper[4982]: I0224 16:26:03.431139 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532506-xzf46" podStartSLOduration=2.027738791 podStartE2EDuration="3.431122659s" podCreationTimestamp="2026-02-24 16:26:00 +0000 UTC" firstStartedPulling="2026-02-24 16:26:01.270552399 +0000 UTC m=+5822.889610882" lastFinishedPulling="2026-02-24 16:26:02.673936257 +0000 UTC m=+5824.292994750" observedRunningTime="2026-02-24 16:26:03.425952237 +0000 UTC m=+5825.045010730" watchObservedRunningTime="2026-02-24 16:26:03.431122659 +0000 UTC m=+5825.050181152" Feb 24 16:26:04 crc kubenswrapper[4982]: I0224 16:26:04.424827 4982 generic.go:334] "Generic (PLEG): container finished" podID="82beadda-27e6-497a-8271-54e32586942c" containerID="448942b858e9325697d1d3c7a22ef51a96a486af67db959b9e95a723446a0d3a" exitCode=0 Feb 24 16:26:04 crc kubenswrapper[4982]: I0224 16:26:04.424898 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532506-xzf46" event={"ID":"82beadda-27e6-497a-8271-54e32586942c","Type":"ContainerDied","Data":"448942b858e9325697d1d3c7a22ef51a96a486af67db959b9e95a723446a0d3a"} Feb 24 16:26:06 crc kubenswrapper[4982]: I0224 16:26:06.001559 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532506-xzf46" Feb 24 16:26:06 crc kubenswrapper[4982]: I0224 16:26:06.165809 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z89p\" (UniqueName: \"kubernetes.io/projected/82beadda-27e6-497a-8271-54e32586942c-kube-api-access-5z89p\") pod \"82beadda-27e6-497a-8271-54e32586942c\" (UID: \"82beadda-27e6-497a-8271-54e32586942c\") " Feb 24 16:26:06 crc kubenswrapper[4982]: I0224 16:26:06.174758 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82beadda-27e6-497a-8271-54e32586942c-kube-api-access-5z89p" (OuterVolumeSpecName: "kube-api-access-5z89p") pod "82beadda-27e6-497a-8271-54e32586942c" (UID: "82beadda-27e6-497a-8271-54e32586942c"). InnerVolumeSpecName "kube-api-access-5z89p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:26:06 crc kubenswrapper[4982]: I0224 16:26:06.269495 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z89p\" (UniqueName: \"kubernetes.io/projected/82beadda-27e6-497a-8271-54e32586942c-kube-api-access-5z89p\") on node \"crc\" DevicePath \"\"" Feb 24 16:26:06 crc kubenswrapper[4982]: I0224 16:26:06.445543 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532506-xzf46" event={"ID":"82beadda-27e6-497a-8271-54e32586942c","Type":"ContainerDied","Data":"2c36ccc99b216a681e1a942dd7ef2802e10afede2fd13ed58f1b00e4a336224e"} Feb 24 16:26:06 crc kubenswrapper[4982]: I0224 16:26:06.445814 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c36ccc99b216a681e1a942dd7ef2802e10afede2fd13ed58f1b00e4a336224e" Feb 24 16:26:06 crc kubenswrapper[4982]: I0224 16:26:06.445590 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532506-xzf46" Feb 24 16:26:06 crc kubenswrapper[4982]: I0224 16:26:06.532258 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532500-v92lg"] Feb 24 16:26:06 crc kubenswrapper[4982]: I0224 16:26:06.551852 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532500-v92lg"] Feb 24 16:26:07 crc kubenswrapper[4982]: I0224 16:26:07.158562 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="857bf54b-e810-401a-b304-6dae26a64c8a" path="/var/lib/kubelet/pods/857bf54b-e810-401a-b304-6dae26a64c8a/volumes" Feb 24 16:26:08 crc kubenswrapper[4982]: I0224 16:26:08.737787 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:26:08 crc kubenswrapper[4982]: I0224 16:26:08.738092 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:26:14 crc kubenswrapper[4982]: I0224 16:26:14.547029 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" event={"ID":"63aba6d9-f06b-4377-95dd-549447ea3ae6","Type":"ContainerStarted","Data":"0c4eec9c2d0e77d103c55002b5f4a9947a1896d4a975e45a00837fc4e50aaf66"} Feb 24 16:26:14 crc kubenswrapper[4982]: I0224 16:26:14.563285 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" podStartSLOduration=1.485429727 podStartE2EDuration="13.563269631s" podCreationTimestamp="2026-02-24 16:26:01 +0000 UTC" firstStartedPulling="2026-02-24 16:26:01.866744711 +0000 UTC m=+5823.485803204" lastFinishedPulling="2026-02-24 16:26:13.944584625 +0000 UTC m=+5835.563643108" observedRunningTime="2026-02-24 16:26:14.558215044 +0000 UTC m=+5836.177273537" watchObservedRunningTime="2026-02-24 16:26:14.563269631 +0000 UTC m=+5836.182328114" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.767368 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zx4fv"] Feb 24 16:26:21 crc kubenswrapper[4982]: E0224 16:26:21.768800 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82beadda-27e6-497a-8271-54e32586942c" containerName="oc" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.768817 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="82beadda-27e6-497a-8271-54e32586942c" containerName="oc" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.769044 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="82beadda-27e6-497a-8271-54e32586942c" containerName="oc" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.770960 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.784749 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zx4fv"] Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.844016 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-catalog-content\") pod \"redhat-operators-zx4fv\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.844418 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bt92\" (UniqueName: \"kubernetes.io/projected/ee198856-dc40-48cd-9221-379e466543b8-kube-api-access-5bt92\") pod \"redhat-operators-zx4fv\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.844834 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-utilities\") pod \"redhat-operators-zx4fv\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.947149 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-utilities\") pod \"redhat-operators-zx4fv\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.947199 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-catalog-content\") pod \"redhat-operators-zx4fv\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.947308 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bt92\" (UniqueName: \"kubernetes.io/projected/ee198856-dc40-48cd-9221-379e466543b8-kube-api-access-5bt92\") pod \"redhat-operators-zx4fv\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.947737 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-utilities\") pod \"redhat-operators-zx4fv\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.947845 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-catalog-content\") pod \"redhat-operators-zx4fv\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:21 crc kubenswrapper[4982]: I0224 16:26:21.966284 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5bt92\" (UniqueName: \"kubernetes.io/projected/ee198856-dc40-48cd-9221-379e466543b8-kube-api-access-5bt92\") pod \"redhat-operators-zx4fv\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:22 crc kubenswrapper[4982]: I0224 16:26:22.097815 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:22 crc kubenswrapper[4982]: I0224 16:26:22.786778 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zx4fv"] Feb 24 16:26:23 crc kubenswrapper[4982]: W0224 16:26:23.477744 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee198856_dc40_48cd_9221_379e466543b8.slice/crio-845dc9fc76289db6e3677b5391bac7f835773bda2b81898acde737c12557bd50 WatchSource:0}: Error finding container 845dc9fc76289db6e3677b5391bac7f835773bda2b81898acde737c12557bd50: Status 404 returned error can't find the container with id 845dc9fc76289db6e3677b5391bac7f835773bda2b81898acde737c12557bd50 Feb 24 16:26:23 crc kubenswrapper[4982]: I0224 16:26:23.655914 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx4fv" event={"ID":"ee198856-dc40-48cd-9221-379e466543b8","Type":"ContainerStarted","Data":"845dc9fc76289db6e3677b5391bac7f835773bda2b81898acde737c12557bd50"} Feb 24 16:26:24 crc kubenswrapper[4982]: I0224 16:26:24.669898 4982 generic.go:334] "Generic (PLEG): container finished" podID="ee198856-dc40-48cd-9221-379e466543b8" containerID="3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc" exitCode=0 Feb 24 16:26:24 crc kubenswrapper[4982]: I0224 16:26:24.669951 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx4fv" event={"ID":"ee198856-dc40-48cd-9221-379e466543b8","Type":"ContainerDied","Data":"3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc"} Feb 24 16:26:28 crc kubenswrapper[4982]: I0224 16:26:28.714133 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx4fv" event={"ID":"ee198856-dc40-48cd-9221-379e466543b8","Type":"ContainerStarted","Data":"516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357"} Feb 24 16:26:34 crc kubenswrapper[4982]: I0224 16:26:34.777252 4982 generic.go:334] "Generic (PLEG): container finished" podID="ee198856-dc40-48cd-9221-379e466543b8" containerID="516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357" exitCode=0 Feb 24 16:26:34 crc kubenswrapper[4982]: I0224 16:26:34.777447 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx4fv" event={"ID":"ee198856-dc40-48cd-9221-379e466543b8","Type":"ContainerDied","Data":"516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357"} Feb 24 16:26:35 crc kubenswrapper[4982]: I0224 16:26:35.789749 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx4fv" event={"ID":"ee198856-dc40-48cd-9221-379e466543b8","Type":"ContainerStarted","Data":"1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d"} Feb 24 16:26:35 crc kubenswrapper[4982]: I0224 16:26:35.816406 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zx4fv" podStartSLOduration=3.992674398 podStartE2EDuration="14.816387791s" 
podCreationTimestamp="2026-02-24 16:26:21 +0000 UTC" firstStartedPulling="2026-02-24 16:26:24.675762738 +0000 UTC m=+5846.294821231" lastFinishedPulling="2026-02-24 16:26:35.499476131 +0000 UTC m=+5857.118534624" observedRunningTime="2026-02-24 16:26:35.809799791 +0000 UTC m=+5857.428858284" watchObservedRunningTime="2026-02-24 16:26:35.816387791 +0000 UTC m=+5857.435446284" Feb 24 16:26:38 crc kubenswrapper[4982]: I0224 16:26:38.737932 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:26:38 crc kubenswrapper[4982]: I0224 16:26:38.738451 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:26:38 crc kubenswrapper[4982]: I0224 16:26:38.738544 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 16:26:38 crc kubenswrapper[4982]: I0224 16:26:38.739646 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a897f4a77a414cc436dfc13cb27f56aa83c877cbc34f4775483201dc8bc5633"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 16:26:38 crc kubenswrapper[4982]: I0224 16:26:38.739711 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://3a897f4a77a414cc436dfc13cb27f56aa83c877cbc34f4775483201dc8bc5633" gracePeriod=600 Feb 24 16:26:39 crc kubenswrapper[4982]: I0224 16:26:39.831453 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="3a897f4a77a414cc436dfc13cb27f56aa83c877cbc34f4775483201dc8bc5633" exitCode=0 Feb 24 16:26:39 crc kubenswrapper[4982]: I0224 16:26:39.831551 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"3a897f4a77a414cc436dfc13cb27f56aa83c877cbc34f4775483201dc8bc5633"} Feb 24 16:26:39 crc kubenswrapper[4982]: I0224 16:26:39.831919 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"} Feb 24 16:26:39 crc kubenswrapper[4982]: I0224 16:26:39.831946 4982 scope.go:117] "RemoveContainer" containerID="0e15981d0f833bcbde1584c8d7538f882318e3df1d5b184a59233f135cbf5304" Feb 24 16:26:42 crc kubenswrapper[4982]: I0224 16:26:42.098373 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:42 crc kubenswrapper[4982]: I0224 16:26:42.099028 4982 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:26:43 crc kubenswrapper[4982]: I0224 16:26:43.159396 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zx4fv" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="registry-server" probeResult="failure" output=< Feb 24 16:26:43 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:26:43 crc kubenswrapper[4982]: > Feb 24 16:26:53 crc kubenswrapper[4982]: I0224 16:26:53.310170 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zx4fv" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="registry-server" probeResult="failure" output=< Feb 24 16:26:53 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:26:53 crc kubenswrapper[4982]: > Feb 24 16:26:54 crc kubenswrapper[4982]: I0224 16:26:54.999291 4982 scope.go:117] "RemoveContainer" containerID="ab15444abce6358a1042a6e36fae73eec4adc2fbcf0545f177af5ff05f2ff0d2" Feb 24 16:27:03 crc kubenswrapper[4982]: I0224 16:27:03.062625 4982 generic.go:334] "Generic (PLEG): container finished" podID="63aba6d9-f06b-4377-95dd-549447ea3ae6" containerID="0c4eec9c2d0e77d103c55002b5f4a9947a1896d4a975e45a00837fc4e50aaf66" exitCode=0 Feb 24 16:27:03 crc kubenswrapper[4982]: I0224 16:27:03.062729 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" event={"ID":"63aba6d9-f06b-4377-95dd-549447ea3ae6","Type":"ContainerDied","Data":"0c4eec9c2d0e77d103c55002b5f4a9947a1896d4a975e45a00837fc4e50aaf66"} Feb 24 16:27:03 crc kubenswrapper[4982]: I0224 16:27:03.164859 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zx4fv" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="registry-server" probeResult="failure" output=< Feb 24 16:27:03 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:27:03 crc kubenswrapper[4982]: > Feb 24 16:27:04 crc kubenswrapper[4982]: I0224 16:27:04.228404 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:27:04 crc kubenswrapper[4982]: I0224 16:27:04.268604 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nsv2s/crc-debug-bx6bh"] Feb 24 16:27:04 crc kubenswrapper[4982]: I0224 16:27:04.278291 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nsv2s/crc-debug-bx6bh"] Feb 24 16:27:04 crc kubenswrapper[4982]: I0224 16:27:04.362417 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63aba6d9-f06b-4377-95dd-549447ea3ae6-host\") pod \"63aba6d9-f06b-4377-95dd-549447ea3ae6\" (UID: \"63aba6d9-f06b-4377-95dd-549447ea3ae6\") " Feb 24 16:27:04 crc kubenswrapper[4982]: I0224 16:27:04.362660 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63aba6d9-f06b-4377-95dd-549447ea3ae6-host" (OuterVolumeSpecName: "host") pod "63aba6d9-f06b-4377-95dd-549447ea3ae6" (UID: "63aba6d9-f06b-4377-95dd-549447ea3ae6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 16:27:04 crc kubenswrapper[4982]: I0224 16:27:04.362804 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qbqn\" (UniqueName: \"kubernetes.io/projected/63aba6d9-f06b-4377-95dd-549447ea3ae6-kube-api-access-8qbqn\") pod \"63aba6d9-f06b-4377-95dd-549447ea3ae6\" (UID: \"63aba6d9-f06b-4377-95dd-549447ea3ae6\") " Feb 24 16:27:04 crc kubenswrapper[4982]: I0224 16:27:04.363912 4982 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/63aba6d9-f06b-4377-95dd-549447ea3ae6-host\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:04 crc kubenswrapper[4982]: I0224 16:27:04.371074 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63aba6d9-f06b-4377-95dd-549447ea3ae6-kube-api-access-8qbqn" (OuterVolumeSpecName: "kube-api-access-8qbqn") pod "63aba6d9-f06b-4377-95dd-549447ea3ae6" (UID: "63aba6d9-f06b-4377-95dd-549447ea3ae6"). InnerVolumeSpecName "kube-api-access-8qbqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:27:04 crc kubenswrapper[4982]: I0224 16:27:04.466439 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qbqn\" (UniqueName: \"kubernetes.io/projected/63aba6d9-f06b-4377-95dd-549447ea3ae6-kube-api-access-8qbqn\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.113277 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e551b035bcfe46eae0e5fcc4ec715c6a2b891607b3ebe609860ef007236ee0" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.113407 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-bx6bh" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.163452 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63aba6d9-f06b-4377-95dd-549447ea3ae6" path="/var/lib/kubelet/pods/63aba6d9-f06b-4377-95dd-549447ea3ae6/volumes" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.491186 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nsv2s/crc-debug-gshcl"] Feb 24 16:27:05 crc kubenswrapper[4982]: E0224 16:27:05.491811 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63aba6d9-f06b-4377-95dd-549447ea3ae6" containerName="container-00" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.491845 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="63aba6d9-f06b-4377-95dd-549447ea3ae6" containerName="container-00" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.492310 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="63aba6d9-f06b-4377-95dd-549447ea3ae6" containerName="container-00" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.493280 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.592138 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-host\") pod \"crc-debug-gshcl\" (UID: \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\") " pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.592412 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brwhh\" (UniqueName: \"kubernetes.io/projected/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-kube-api-access-brwhh\") pod \"crc-debug-gshcl\" (UID: \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\") " pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.695335 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-host\") pod \"crc-debug-gshcl\" (UID: \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\") " pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.695777 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brwhh\" (UniqueName: \"kubernetes.io/projected/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-kube-api-access-brwhh\") pod \"crc-debug-gshcl\" (UID: \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\") " pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.695476 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-host\") pod \"crc-debug-gshcl\" (UID: \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\") " pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.718114 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brwhh\" (UniqueName: \"kubernetes.io/projected/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-kube-api-access-brwhh\") pod \"crc-debug-gshcl\" (UID: \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\") " pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:05 crc kubenswrapper[4982]: I0224 16:27:05.814869 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:06 crc kubenswrapper[4982]: I0224 16:27:06.129136 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/crc-debug-gshcl" event={"ID":"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9","Type":"ContainerStarted","Data":"6201ca3a08d410bd9af45cb704cc723efaf680db2ac6ce272346be698ed928a0"} Feb 24 16:27:07 crc kubenswrapper[4982]: I0224 16:27:07.143277 4982 generic.go:334] "Generic (PLEG): container finished" podID="c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9" containerID="c619d190f1b3121f09aa0fad0f669015c23b44331a50435fa545378ea9b93e3d" exitCode=0 Feb 24 16:27:07 crc kubenswrapper[4982]: I0224 16:27:07.143336 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/crc-debug-gshcl" event={"ID":"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9","Type":"ContainerDied","Data":"c619d190f1b3121f09aa0fad0f669015c23b44331a50435fa545378ea9b93e3d"} Feb 24 16:27:08 crc kubenswrapper[4982]: I0224 16:27:08.271017 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:08 crc kubenswrapper[4982]: I0224 16:27:08.358717 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-host\") pod \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\" (UID: \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\") " Feb 24 16:27:08 crc kubenswrapper[4982]: I0224 16:27:08.359064 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brwhh\" (UniqueName: \"kubernetes.io/projected/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-kube-api-access-brwhh\") pod \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\" (UID: \"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9\") " Feb 24 16:27:08 crc kubenswrapper[4982]: I0224 16:27:08.358837 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-host" (OuterVolumeSpecName: "host") pod "c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9" (UID: "c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 16:27:08 crc kubenswrapper[4982]: I0224 16:27:08.359774 4982 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-host\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:08 crc kubenswrapper[4982]: I0224 16:27:08.962348 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-kube-api-access-brwhh" (OuterVolumeSpecName: "kube-api-access-brwhh") pod "c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9" (UID: "c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9"). InnerVolumeSpecName "kube-api-access-brwhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:27:08 crc kubenswrapper[4982]: I0224 16:27:08.972438 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brwhh\" (UniqueName: \"kubernetes.io/projected/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9-kube-api-access-brwhh\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:09 crc kubenswrapper[4982]: I0224 16:27:09.163290 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-gshcl" Feb 24 16:27:09 crc kubenswrapper[4982]: I0224 16:27:09.164524 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/crc-debug-gshcl" event={"ID":"c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9","Type":"ContainerDied","Data":"6201ca3a08d410bd9af45cb704cc723efaf680db2ac6ce272346be698ed928a0"} Feb 24 16:27:09 crc kubenswrapper[4982]: I0224 16:27:09.164559 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6201ca3a08d410bd9af45cb704cc723efaf680db2ac6ce272346be698ed928a0" Feb 24 16:27:09 crc kubenswrapper[4982]: I0224 16:27:09.289077 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nsv2s/crc-debug-gshcl"] Feb 24 16:27:09 crc kubenswrapper[4982]: I0224 16:27:09.300947 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nsv2s/crc-debug-gshcl"] Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.501205 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nsv2s/crc-debug-9fkw5"] Feb 24 16:27:10 crc kubenswrapper[4982]: E0224 16:27:10.502100 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9" containerName="container-00" Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.502118 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9" containerName="container-00" Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.502438 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9" containerName="container-00" Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.503495 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.609705 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twldj\" (UniqueName: \"kubernetes.io/projected/62230e38-7498-4ba2-adb4-f5dbacedc6bd-kube-api-access-twldj\") pod \"crc-debug-9fkw5\" (UID: \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\") " pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.610241 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62230e38-7498-4ba2-adb4-f5dbacedc6bd-host\") pod \"crc-debug-9fkw5\" (UID: \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\") " pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.712413 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twldj\" (UniqueName: \"kubernetes.io/projected/62230e38-7498-4ba2-adb4-f5dbacedc6bd-kube-api-access-twldj\") pod \"crc-debug-9fkw5\" (UID: \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\") " pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.712512 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62230e38-7498-4ba2-adb4-f5dbacedc6bd-host\") pod \"crc-debug-9fkw5\" (UID: \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\") " pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.712674 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62230e38-7498-4ba2-adb4-f5dbacedc6bd-host\") pod \"crc-debug-9fkw5\" (UID: \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\") " pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:10 crc kubenswrapper[4982]: I0224 16:27:10.963245 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twldj\" (UniqueName: \"kubernetes.io/projected/62230e38-7498-4ba2-adb4-f5dbacedc6bd-kube-api-access-twldj\") pod \"crc-debug-9fkw5\" (UID: \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\") " pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:11 crc kubenswrapper[4982]: I0224 16:27:11.123888 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:11 crc kubenswrapper[4982]: I0224 16:27:11.162791 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9" path="/var/lib/kubelet/pods/c5d7cac1-e18c-427c-bb5a-b5e9a3c467e9/volumes" Feb 24 16:27:11 crc kubenswrapper[4982]: I0224 16:27:11.198915 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" event={"ID":"62230e38-7498-4ba2-adb4-f5dbacedc6bd","Type":"ContainerStarted","Data":"c88dff13263a32179bf96a1b1294d40c699cbca58a171f09c41f4dc5846f69f9"} Feb 24 16:27:12 crc kubenswrapper[4982]: I0224 16:27:12.169995 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:27:12 crc kubenswrapper[4982]: I0224 16:27:12.219130 4982 generic.go:334] "Generic (PLEG): container finished" podID="62230e38-7498-4ba2-adb4-f5dbacedc6bd" containerID="0094c3f940a72563b76120c37a59f339778fd8ca1b5687155ae30391c4512711" exitCode=0 Feb 24 16:27:12 crc kubenswrapper[4982]: I0224 16:27:12.219184 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" event={"ID":"62230e38-7498-4ba2-adb4-f5dbacedc6bd","Type":"ContainerDied","Data":"0094c3f940a72563b76120c37a59f339778fd8ca1b5687155ae30391c4512711"} Feb 24 16:27:12 crc kubenswrapper[4982]: I0224 16:27:12.237025 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:27:12 crc kubenswrapper[4982]: I0224 16:27:12.284000 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nsv2s/crc-debug-9fkw5"] Feb 24 16:27:12 crc kubenswrapper[4982]: I0224 16:27:12.294434 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nsv2s/crc-debug-9fkw5"] Feb 24 16:27:12 crc kubenswrapper[4982]: I0224 16:27:12.408440 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zx4fv"] Feb 24 16:27:13 crc kubenswrapper[4982]: I0224 16:27:13.230469 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zx4fv" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="registry-server" containerID="cri-o://1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d" gracePeriod=2 Feb 24 16:27:13 crc kubenswrapper[4982]: I0224 16:27:13.513422 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:13 crc kubenswrapper[4982]: I0224 16:27:13.576073 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62230e38-7498-4ba2-adb4-f5dbacedc6bd-host\") pod \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\" (UID: \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\") " Feb 24 16:27:13 crc kubenswrapper[4982]: I0224 16:27:13.576219 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62230e38-7498-4ba2-adb4-f5dbacedc6bd-host" (OuterVolumeSpecName: "host") pod "62230e38-7498-4ba2-adb4-f5dbacedc6bd" (UID: "62230e38-7498-4ba2-adb4-f5dbacedc6bd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 16:27:13 crc kubenswrapper[4982]: I0224 16:27:13.576318 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twldj\" (UniqueName: \"kubernetes.io/projected/62230e38-7498-4ba2-adb4-f5dbacedc6bd-kube-api-access-twldj\") pod \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\" (UID: \"62230e38-7498-4ba2-adb4-f5dbacedc6bd\") " Feb 24 16:27:13 crc kubenswrapper[4982]: I0224 16:27:13.577119 4982 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62230e38-7498-4ba2-adb4-f5dbacedc6bd-host\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:13 crc kubenswrapper[4982]: I0224 16:27:13.587816 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62230e38-7498-4ba2-adb4-f5dbacedc6bd-kube-api-access-twldj" (OuterVolumeSpecName: "kube-api-access-twldj") pod "62230e38-7498-4ba2-adb4-f5dbacedc6bd" (UID: "62230e38-7498-4ba2-adb4-f5dbacedc6bd"). InnerVolumeSpecName "kube-api-access-twldj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:27:13 crc kubenswrapper[4982]: I0224 16:27:13.679126 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twldj\" (UniqueName: \"kubernetes.io/projected/62230e38-7498-4ba2-adb4-f5dbacedc6bd-kube-api-access-twldj\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:13 crc kubenswrapper[4982]: I0224 16:27:13.990794 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.089703 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-catalog-content\") pod \"ee198856-dc40-48cd-9221-379e466543b8\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.089745 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bt92\" (UniqueName: \"kubernetes.io/projected/ee198856-dc40-48cd-9221-379e466543b8-kube-api-access-5bt92\") pod \"ee198856-dc40-48cd-9221-379e466543b8\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.089767 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-utilities\") pod \"ee198856-dc40-48cd-9221-379e466543b8\" (UID: \"ee198856-dc40-48cd-9221-379e466543b8\") " Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.090872 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-utilities" (OuterVolumeSpecName: "utilities") pod "ee198856-dc40-48cd-9221-379e466543b8" (UID: "ee198856-dc40-48cd-9221-379e466543b8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.091181 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.093871 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee198856-dc40-48cd-9221-379e466543b8-kube-api-access-5bt92" (OuterVolumeSpecName: "kube-api-access-5bt92") pod "ee198856-dc40-48cd-9221-379e466543b8" (UID: "ee198856-dc40-48cd-9221-379e466543b8"). InnerVolumeSpecName "kube-api-access-5bt92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.193460 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bt92\" (UniqueName: \"kubernetes.io/projected/ee198856-dc40-48cd-9221-379e466543b8-kube-api-access-5bt92\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.211822 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee198856-dc40-48cd-9221-379e466543b8" (UID: "ee198856-dc40-48cd-9221-379e466543b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.240331 4982 scope.go:117] "RemoveContainer" containerID="0094c3f940a72563b76120c37a59f339778fd8ca1b5687155ae30391c4512711" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.240341 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/crc-debug-9fkw5" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.242730 4982 generic.go:334] "Generic (PLEG): container finished" podID="ee198856-dc40-48cd-9221-379e466543b8" containerID="1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d" exitCode=0 Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.242773 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx4fv" event={"ID":"ee198856-dc40-48cd-9221-379e466543b8","Type":"ContainerDied","Data":"1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d"} Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.242797 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zx4fv" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.242803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zx4fv" event={"ID":"ee198856-dc40-48cd-9221-379e466543b8","Type":"ContainerDied","Data":"845dc9fc76289db6e3677b5391bac7f835773bda2b81898acde737c12557bd50"} Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.296173 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee198856-dc40-48cd-9221-379e466543b8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.306995 4982 scope.go:117] "RemoveContainer" containerID="1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.354428 4982 scope.go:117] "RemoveContainer" containerID="516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.371315 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zx4fv"] Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.374928 4982 scope.go:117] "RemoveContainer" containerID="3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.383196 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zx4fv"] Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.421858 4982 scope.go:117] "RemoveContainer" containerID="1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d" Feb 24 16:27:14 crc kubenswrapper[4982]: E0224 16:27:14.422449 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d\": container with ID starting with 1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d not found: ID does not exist" containerID="1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.422488 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d"} err="failed to get container status \"1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d\": rpc error: code = NotFound desc = could not find container \"1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d\": container with ID starting with 1e536ae7da618accdcd1553f1d80477cc50d7f3f2c13827bd05175357df6d31d not found: ID does not exist" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.422530 4982 scope.go:117] "RemoveContainer" containerID="516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357" Feb 24 16:27:14 crc kubenswrapper[4982]: E0224 16:27:14.423057 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357\": container with ID starting with 516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357 not found: ID does not exist" containerID="516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.423096 4982 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357"} err="failed to get container status \"516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357\": rpc error: code = NotFound desc = could not find container \"516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357\": container with ID starting with 516092d3b96b35b79e70e735a2cf2d85bf389c2c8b3db444c01e160b264a3357 not found: ID does not exist" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.423120 4982 scope.go:117] "RemoveContainer" containerID="3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc" Feb 24 16:27:14 crc kubenswrapper[4982]: E0224 16:27:14.423481 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc\": container with ID starting with 3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc not found: ID does not exist" containerID="3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc" Feb 24 16:27:14 crc kubenswrapper[4982]: I0224 16:27:14.423526 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc"} err="failed to get container status \"3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc\": rpc error: code = NotFound desc = could not find container \"3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc\": container with ID starting with 3da05b86c9023ef185a56bd2bfb392e1ffd6ad4cd32ec0a941857b839f3197cc not found: ID does not exist" Feb 24 16:27:15 crc kubenswrapper[4982]: I0224 16:27:15.166579 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62230e38-7498-4ba2-adb4-f5dbacedc6bd" path="/var/lib/kubelet/pods/62230e38-7498-4ba2-adb4-f5dbacedc6bd/volumes" Feb 24 16:27:15 crc kubenswrapper[4982]: I0224 16:27:15.167200 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee198856-dc40-48cd-9221-379e466543b8" path="/var/lib/kubelet/pods/ee198856-dc40-48cd-9221-379e466543b8/volumes" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.850934 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dd269"] Feb 24 16:27:32 crc kubenswrapper[4982]: E0224 16:27:32.852275 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62230e38-7498-4ba2-adb4-f5dbacedc6bd" containerName="container-00" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.852386 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="62230e38-7498-4ba2-adb4-f5dbacedc6bd" containerName="container-00" Feb 24 16:27:32 crc kubenswrapper[4982]: E0224 16:27:32.852403 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="extract-content" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.852412 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="extract-content" Feb 24 16:27:32 crc kubenswrapper[4982]: E0224 16:27:32.852429 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="registry-server" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.852437 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="registry-server" 
Feb 24 16:27:32 crc kubenswrapper[4982]: E0224 16:27:32.852471 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="extract-utilities" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.852480 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="extract-utilities" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.853554 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee198856-dc40-48cd-9221-379e466543b8" containerName="registry-server" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.853598 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="62230e38-7498-4ba2-adb4-f5dbacedc6bd" containerName="container-00" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.855786 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.869234 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dd269"] Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.952596 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-catalog-content\") pod \"redhat-marketplace-dd269\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.952651 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlcv\" (UniqueName: \"kubernetes.io/projected/69354491-d341-499d-a5d0-5b3ff22f3808-kube-api-access-bzlcv\") pod \"redhat-marketplace-dd269\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:32 crc kubenswrapper[4982]: I0224 16:27:32.952855 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-utilities\") pod \"redhat-marketplace-dd269\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:33 crc kubenswrapper[4982]: I0224 16:27:33.055580 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-utilities\") pod \"redhat-marketplace-dd269\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:33 crc kubenswrapper[4982]: I0224 16:27:33.055780 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-catalog-content\") pod \"redhat-marketplace-dd269\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:33 crc kubenswrapper[4982]: I0224 16:27:33.055809 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzlcv\" (UniqueName: \"kubernetes.io/projected/69354491-d341-499d-a5d0-5b3ff22f3808-kube-api-access-bzlcv\") pod \"redhat-marketplace-dd269\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " 
pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:33 crc kubenswrapper[4982]: I0224 16:27:33.056051 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-utilities\") pod \"redhat-marketplace-dd269\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:33 crc kubenswrapper[4982]: I0224 16:27:33.056213 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-catalog-content\") pod \"redhat-marketplace-dd269\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:33 crc kubenswrapper[4982]: I0224 16:27:33.076066 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzlcv\" (UniqueName: \"kubernetes.io/projected/69354491-d341-499d-a5d0-5b3ff22f3808-kube-api-access-bzlcv\") pod \"redhat-marketplace-dd269\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:33 crc kubenswrapper[4982]: I0224 16:27:33.189609 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:33 crc kubenswrapper[4982]: I0224 16:27:33.709540 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dd269"] Feb 24 16:27:34 crc kubenswrapper[4982]: W0224 16:27:34.077563 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69354491_d341_499d_a5d0_5b3ff22f3808.slice/crio-cd11a3adab68598583a59f912d4caa470acb2e96c0b9443fbde303ee190b1c56 WatchSource:0}: Error finding container cd11a3adab68598583a59f912d4caa470acb2e96c0b9443fbde303ee190b1c56: Status 404 returned error can't find the container with id cd11a3adab68598583a59f912d4caa470acb2e96c0b9443fbde303ee190b1c56 Feb 24 16:27:34 crc kubenswrapper[4982]: I0224 16:27:34.468774 4982 generic.go:334] "Generic (PLEG): container finished" podID="69354491-d341-499d-a5d0-5b3ff22f3808" containerID="d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187" exitCode=0 Feb 24 16:27:34 crc kubenswrapper[4982]: I0224 16:27:34.468819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd269" event={"ID":"69354491-d341-499d-a5d0-5b3ff22f3808","Type":"ContainerDied","Data":"d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187"} Feb 24 16:27:34 crc kubenswrapper[4982]: I0224 16:27:34.468847 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd269" event={"ID":"69354491-d341-499d-a5d0-5b3ff22f3808","Type":"ContainerStarted","Data":"cd11a3adab68598583a59f912d4caa470acb2e96c0b9443fbde303ee190b1c56"} Feb 24 16:27:35 crc kubenswrapper[4982]: I0224 16:27:35.486325 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd269" event={"ID":"69354491-d341-499d-a5d0-5b3ff22f3808","Type":"ContainerStarted","Data":"2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45"} Feb 24 16:27:37 crc kubenswrapper[4982]: I0224 16:27:37.522391 4982 generic.go:334] "Generic (PLEG): container finished" podID="69354491-d341-499d-a5d0-5b3ff22f3808" 
containerID="2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45" exitCode=0 Feb 24 16:27:37 crc kubenswrapper[4982]: I0224 16:27:37.522469 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd269" event={"ID":"69354491-d341-499d-a5d0-5b3ff22f3808","Type":"ContainerDied","Data":"2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45"} Feb 24 16:27:38 crc kubenswrapper[4982]: I0224 16:27:38.541270 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd269" event={"ID":"69354491-d341-499d-a5d0-5b3ff22f3808","Type":"ContainerStarted","Data":"58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6"} Feb 24 16:27:38 crc kubenswrapper[4982]: I0224 16:27:38.569331 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dd269" podStartSLOduration=3.092821224 podStartE2EDuration="6.569300366s" podCreationTimestamp="2026-02-24 16:27:32 +0000 UTC" firstStartedPulling="2026-02-24 16:27:34.471140046 +0000 UTC m=+5916.090198539" lastFinishedPulling="2026-02-24 16:27:37.947619148 +0000 UTC m=+5919.566677681" observedRunningTime="2026-02-24 16:27:38.566895391 +0000 UTC m=+5920.185953994" watchObservedRunningTime="2026-02-24 16:27:38.569300366 +0000 UTC m=+5920.188358859" Feb 24 16:27:42 crc kubenswrapper[4982]: I0224 16:27:42.285371 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7b46506f-5421-47ab-9ed9-2328c663adb8/aodh-api/0.log" Feb 24 16:27:42 crc kubenswrapper[4982]: I0224 16:27:42.455846 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7b46506f-5421-47ab-9ed9-2328c663adb8/aodh-listener/0.log" Feb 24 16:27:42 crc kubenswrapper[4982]: I0224 16:27:42.475870 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7b46506f-5421-47ab-9ed9-2328c663adb8/aodh-notifier/0.log" Feb 24 16:27:42 crc kubenswrapper[4982]: I0224 16:27:42.482331 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7b46506f-5421-47ab-9ed9-2328c663adb8/aodh-evaluator/0.log" Feb 24 16:27:42 crc kubenswrapper[4982]: I0224 16:27:42.673140 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5466bd89fd-vjk4t_ba8a29e9-d071-470e-80eb-8c749b582614/barbican-api-log/0.log" Feb 24 16:27:42 crc kubenswrapper[4982]: I0224 16:27:42.683904 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5466bd89fd-vjk4t_ba8a29e9-d071-470e-80eb-8c749b582614/barbican-api/0.log" Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.192450 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.192557 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.420822 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.616200 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77f5949dd7-rps9v_7334fe93-50f7-486c-a11a-1cac15b026da/barbican-worker/0.log" Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.622675 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-77797c9b78-rvhwv_17fd2f10-64fe-4299-9f45-b81e02687f53/barbican-keystone-listener/0.log" Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.649188 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.703744 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dd269"] Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.714045 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77797c9b78-rvhwv_17fd2f10-64fe-4299-9f45-b81e02687f53/barbican-keystone-listener-log/0.log" Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.820464 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77f5949dd7-rps9v_7334fe93-50f7-486c-a11a-1cac15b026da/barbican-worker-log/0.log" Feb 24 16:27:43 crc kubenswrapper[4982]: I0224 16:27:43.916842 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc_ff453eac-e860-4c72-9c3c-0aa80e0554d1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:44 crc kubenswrapper[4982]: I0224 16:27:44.114001 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f9dd449a-d430-42cf-8d1a-492c750fde59/ceilometer-notification-agent/0.log" Feb 24 16:27:44 crc kubenswrapper[4982]: I0224 16:27:44.135112 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f9dd449a-d430-42cf-8d1a-492c750fde59/ceilometer-central-agent/0.log" Feb 24 16:27:44 crc kubenswrapper[4982]: I0224 16:27:44.161706 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f9dd449a-d430-42cf-8d1a-492c750fde59/proxy-httpd/0.log" Feb 24 16:27:44 crc kubenswrapper[4982]: I0224 16:27:44.224942 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f9dd449a-d430-42cf-8d1a-492c750fde59/sg-core/0.log" Feb 24 16:27:44 crc kubenswrapper[4982]: I0224 16:27:44.393412 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0aa9e47a-4c17-47f4-9541-60b8f91236fd/cinder-api-log/0.log" Feb 24 16:27:44 crc kubenswrapper[4982]: I0224 16:27:44.393935 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0aa9e47a-4c17-47f4-9541-60b8f91236fd/cinder-api/0.log" Feb 24 16:27:44 crc kubenswrapper[4982]: I0224 16:27:44.777645 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9e704a92-af74-4bf8-bf2f-3d684b08a722/probe/0.log" Feb 24 16:27:44 crc kubenswrapper[4982]: I0224 16:27:44.823804 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9e704a92-af74-4bf8-bf2f-3d684b08a722/cinder-scheduler/0.log" Feb 24 16:27:45 crc kubenswrapper[4982]: I0224 16:27:45.604141 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dd269" podUID="69354491-d341-499d-a5d0-5b3ff22f3808" containerName="registry-server" containerID="cri-o://58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6" gracePeriod=2 Feb 24 16:27:45 crc kubenswrapper[4982]: I0224 16:27:45.654541 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g_17fad281-67b9-48ec-b6a3-82ecd730d659/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:45 crc kubenswrapper[4982]: I0224 16:27:45.654854 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv_56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:45 crc kubenswrapper[4982]: I0224 16:27:45.839776 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-82n9c_4c978f3a-f7d2-4a33-a206-b38bf80aae1f/init/0.log" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.022397 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-82n9c_4c978f3a-f7d2-4a33-a206-b38bf80aae1f/init/0.log" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.080916 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2_b7210463-d869-4f1c-8d7b-60985c525f58/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.126712 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.127426 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-82n9c_4c978f3a-f7d2-4a33-a206-b38bf80aae1f/dnsmasq-dns/0.log" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.186319 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-utilities\") pod \"69354491-d341-499d-a5d0-5b3ff22f3808\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.186361 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-catalog-content\") pod \"69354491-d341-499d-a5d0-5b3ff22f3808\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.186401 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzlcv\" (UniqueName: \"kubernetes.io/projected/69354491-d341-499d-a5d0-5b3ff22f3808-kube-api-access-bzlcv\") pod \"69354491-d341-499d-a5d0-5b3ff22f3808\" (UID: \"69354491-d341-499d-a5d0-5b3ff22f3808\") " Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.189312 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-utilities" (OuterVolumeSpecName: "utilities") pod "69354491-d341-499d-a5d0-5b3ff22f3808" (UID: "69354491-d341-499d-a5d0-5b3ff22f3808"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.196655 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69354491-d341-499d-a5d0-5b3ff22f3808-kube-api-access-bzlcv" (OuterVolumeSpecName: "kube-api-access-bzlcv") pod "69354491-d341-499d-a5d0-5b3ff22f3808" (UID: "69354491-d341-499d-a5d0-5b3ff22f3808"). InnerVolumeSpecName "kube-api-access-bzlcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.220178 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69354491-d341-499d-a5d0-5b3ff22f3808" (UID: "69354491-d341-499d-a5d0-5b3ff22f3808"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.283352 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_dfd138cb-0c04-4683-9af6-b623fb39c84f/glance-httpd/0.log" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.289011 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.289205 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69354491-d341-499d-a5d0-5b3ff22f3808-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.289291 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzlcv\" (UniqueName: \"kubernetes.io/projected/69354491-d341-499d-a5d0-5b3ff22f3808-kube-api-access-bzlcv\") on node \"crc\" DevicePath \"\"" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.314271 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_dfd138cb-0c04-4683-9af6-b623fb39c84f/glance-log/0.log" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.365945 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_62fed6a9-a36d-485f-bedc-cb54f1dad363/glance-log/0.log" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.391883 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_62fed6a9-a36d-485f-bedc-cb54f1dad363/glance-httpd/0.log" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.614418 4982 generic.go:334] "Generic (PLEG): container finished" podID="69354491-d341-499d-a5d0-5b3ff22f3808" containerID="58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6" exitCode=0 Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.614465 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd269" event={"ID":"69354491-d341-499d-a5d0-5b3ff22f3808","Type":"ContainerDied","Data":"58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6"} Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.614484 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dd269" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.614520 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dd269" event={"ID":"69354491-d341-499d-a5d0-5b3ff22f3808","Type":"ContainerDied","Data":"cd11a3adab68598583a59f912d4caa470acb2e96c0b9443fbde303ee190b1c56"} Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.614549 4982 scope.go:117] "RemoveContainer" containerID="58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.635762 4982 scope.go:117] "RemoveContainer" containerID="2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.660653 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dd269"] Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.669990 4982 scope.go:117] "RemoveContainer" containerID="d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.678425 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dd269"] Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.716969 4982 scope.go:117] "RemoveContainer" containerID="58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6" Feb 24 16:27:46 crc kubenswrapper[4982]: E0224 16:27:46.717363 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6\": container with ID starting with 58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6 not found: ID does not exist" containerID="58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.717405 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6"} err="failed to get container status \"58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6\": rpc error: code = NotFound desc = could not find container \"58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6\": container with ID starting with 58ff73a561e90344b27218c09e056656aa279d5f96fd6623b19679a3837e2df6 not found: ID does not exist" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.717433 4982 scope.go:117] "RemoveContainer" containerID="2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45" Feb 24 16:27:46 crc kubenswrapper[4982]: E0224 16:27:46.719855 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45\": container with ID starting with 2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45 not found: ID does not exist" containerID="2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.719889 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45"} err="failed to get container status \"2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45\": rpc error: code = NotFound desc = could not find 
container \"2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45\": container with ID starting with 2c8e932166c3e17cd0524aecac2409b27c9fc93cd01680ea5558ac5f5096fc45 not found: ID does not exist" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.719909 4982 scope.go:117] "RemoveContainer" containerID="d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187" Feb 24 16:27:46 crc kubenswrapper[4982]: E0224 16:27:46.727628 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187\": container with ID starting with d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187 not found: ID does not exist" containerID="d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.727677 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187"} err="failed to get container status \"d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187\": rpc error: code = NotFound desc = could not find container \"d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187\": container with ID starting with d8d312495685822fdf6077db2dfc9cb871b33054aae421b51cf1a9436224e187 not found: ID does not exist" Feb 24 16:27:46 crc kubenswrapper[4982]: I0224 16:27:46.993280 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv_6a256c5e-deb6-4688-a9d7-080502e07609/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.097744 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-847b6bf986-6knzb_ea93c40b-bf1e-433e-8782-fcb3781962a7/heat-engine/0.log" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.163830 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69354491-d341-499d-a5d0-5b3ff22f3808" path="/var/lib/kubelet/pods/69354491-d341-499d-a5d0-5b3ff22f3808/volumes" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.210576 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2zfwl_1b676930-50c4-4caa-aeba-85814ae02e3a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.231871 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-779fc6f99c-lz6wq_f7257c07-0ab1-4030-9c40-c942d0de78f9/heat-api/0.log" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.333865 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-66f8dcd8d5-cmr9n_533504d0-9d95-4bbd-8c8e-10696cd115a6/heat-cfnapi/0.log" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.462891 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29532481-7srlf_e79089ba-59a1-4b26-b33f-9f4b5406d41e/keystone-cron/0.log" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.649607 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ddb94506-62fc-4912-92e6-6d37a079eba1/kube-state-metrics/0.log" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.737728 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx_1b824030-74f0-4482-b022-6c9cc5e52aac/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.799055 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-66dcfd46c4-xzlpx_69f6a530-6cab-4af8-a122-2648213a4c8b/keystone-api/0.log" Feb 24 16:27:47 crc kubenswrapper[4982]: I0224 16:27:47.867041 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-wx72q_dd768711-30f7-4520-845c-c2f7c45a7c6b/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:48 crc kubenswrapper[4982]: I0224 16:27:48.057827 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_920bb33f-bbf4-4a58-bfac-ce0d9eda6001/mysqld-exporter/0.log" Feb 24 16:27:48 crc kubenswrapper[4982]: I0224 16:27:48.380614 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn_c374a6b9-31c3-45f7-a188-ec0dc5df244d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:48 crc kubenswrapper[4982]: I0224 16:27:48.431384 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f7b97458f-lsgqv_ace3b91f-7d2e-405d-a191-1260b2def481/neutron-httpd/0.log" Feb 24 16:27:48 crc kubenswrapper[4982]: I0224 16:27:48.475281 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f7b97458f-lsgqv_ace3b91f-7d2e-405d-a191-1260b2def481/neutron-api/0.log" Feb 24 16:27:49 crc kubenswrapper[4982]: I0224 16:27:49.171269 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_802ee268-f1cc-4138-8361-c61c4b2d005a/nova-cell0-conductor-conductor/0.log" Feb 24 16:27:49 crc kubenswrapper[4982]: I0224 16:27:49.580550 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_67c35239-eb72-4ef1-b22a-be4ea3374b3c/nova-cell1-conductor-conductor/0.log" Feb 24 16:27:49 crc kubenswrapper[4982]: I0224 16:27:49.593177 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aac85fdb-fa1c-47c7-8904-d72aa10f69ae/nova-api-log/0.log" Feb 24 16:27:49 crc kubenswrapper[4982]: I0224 16:27:49.837003 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_19052f4e-3c96-4cf0-82f6-be3740f6a857/nova-cell1-novncproxy-novncproxy/0.log" Feb 24 16:27:49 crc kubenswrapper[4982]: I0224 16:27:49.848111 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-qjzqt_de5e4e7c-74db-408c-bad2-18f9d3bdfeb7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:49 crc kubenswrapper[4982]: I0224 16:27:49.937595 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aac85fdb-fa1c-47c7-8904-d72aa10f69ae/nova-api-api/0.log" Feb 24 16:27:50 crc kubenswrapper[4982]: I0224 16:27:50.159228 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aa8b9e96-e44a-4a46-87c6-0a473fc97e22/nova-metadata-log/0.log" Feb 24 16:27:50 crc kubenswrapper[4982]: I0224 16:27:50.395924 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_92994661-7f5e-4171-9e13-f725269e3475/mysql-bootstrap/0.log" Feb 24 16:27:50 crc kubenswrapper[4982]: I0224 16:27:50.514246 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_0c763ae1-e628-4088-b3da-1a4392f1cb37/nova-scheduler-scheduler/0.log" Feb 24 16:27:50 crc kubenswrapper[4982]: I0224 16:27:50.625858 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_92994661-7f5e-4171-9e13-f725269e3475/mysql-bootstrap/0.log" Feb 24 16:27:50 crc kubenswrapper[4982]: I0224 16:27:50.650441 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_92994661-7f5e-4171-9e13-f725269e3475/galera/0.log" Feb 24 16:27:50 crc kubenswrapper[4982]: I0224 16:27:50.835758 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e9dd965-1448-4801-9871-b6d949a7e1e7/mysql-bootstrap/0.log" Feb 24 16:27:51 crc kubenswrapper[4982]: I0224 16:27:51.046617 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e9dd965-1448-4801-9871-b6d949a7e1e7/mysql-bootstrap/0.log" Feb 24 16:27:51 crc kubenswrapper[4982]: I0224 16:27:51.159464 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e9dd965-1448-4801-9871-b6d949a7e1e7/galera/0.log" Feb 24 16:27:51 crc kubenswrapper[4982]: I0224 16:27:51.268666 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_546fb62a-ffa6-4067-b267-ea1ff18ee76e/openstackclient/0.log" Feb 24 16:27:51 crc kubenswrapper[4982]: I0224 16:27:51.349657 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xf5bb_6846e57a-0d17-4fd1-b470-5923690ad622/openstack-network-exporter/0.log" Feb 24 16:27:51 crc kubenswrapper[4982]: I0224 16:27:51.579814 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsdm4_8534c002-8446-4b80-ae93-6d529c59d1df/ovsdb-server-init/0.log" Feb 24 16:27:51 crc kubenswrapper[4982]: I0224 16:27:51.772561 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsdm4_8534c002-8446-4b80-ae93-6d529c59d1df/ovs-vswitchd/0.log" Feb 24 16:27:51 crc kubenswrapper[4982]: I0224 16:27:51.776628 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsdm4_8534c002-8446-4b80-ae93-6d529c59d1df/ovsdb-server-init/0.log" Feb 24 16:27:51 crc kubenswrapper[4982]: I0224 16:27:51.804272 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsdm4_8534c002-8446-4b80-ae93-6d529c59d1df/ovsdb-server/0.log" Feb 24 16:27:52 crc kubenswrapper[4982]: I0224 16:27:52.044115 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xcvbd_2c21939f-82d2-4553-acbd-b570e4d1527c/ovn-controller/0.log" Feb 24 16:27:52 crc kubenswrapper[4982]: I0224 16:27:52.057699 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aa8b9e96-e44a-4a46-87c6-0a473fc97e22/nova-metadata-metadata/0.log" Feb 24 16:27:52 crc kubenswrapper[4982]: I0224 16:27:52.274888 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_86cf4035-92fa-4d4d-9ce7-ca961da212c2/openstack-network-exporter/0.log" Feb 24 16:27:52 crc kubenswrapper[4982]: I0224 16:27:52.285175 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-srvk7_1646ea82-1708-4dbe-b6af-7cc0c9fe35b1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:27:52 crc kubenswrapper[4982]: I0224 16:27:52.303027 4982 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_86cf4035-92fa-4d4d-9ce7-ca961da212c2/ovn-northd/0.log" Feb 24 16:27:52 crc kubenswrapper[4982]: I0224 16:27:52.500512 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_83806fad-ace2-4023-9023-88d534d78650/openstack-network-exporter/0.log" Feb 24 16:27:52 crc kubenswrapper[4982]: I0224 16:27:52.564619 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_83806fad-ace2-4023-9023-88d534d78650/ovsdbserver-nb/0.log" Feb 24 16:27:52 crc kubenswrapper[4982]: I0224 16:27:52.935157 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_43d8feff-fc48-4fcc-86f8-ce96094eded1/ovsdbserver-sb/0.log" Feb 24 16:27:53 crc kubenswrapper[4982]: I0224 16:27:53.022061 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_43d8feff-fc48-4fcc-86f8-ce96094eded1/openstack-network-exporter/0.log" Feb 24 16:27:53 crc kubenswrapper[4982]: I0224 16:27:53.259814 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6557c4cbb4-n6w8z_26e32ece-d925-472d-91cd-db19b7bbb9ed/placement-api/0.log" Feb 24 16:27:53 crc kubenswrapper[4982]: I0224 16:27:53.384569 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6557c4cbb4-n6w8z_26e32ece-d925-472d-91cd-db19b7bbb9ed/placement-log/0.log" Feb 24 16:27:53 crc kubenswrapper[4982]: I0224 16:27:53.401095 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/init-config-reloader/0.log" Feb 24 16:27:53 crc kubenswrapper[4982]: I0224 16:27:53.592923 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/init-config-reloader/0.log" Feb 24 16:27:53 crc kubenswrapper[4982]: I0224 16:27:53.620332 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/prometheus/0.log" Feb 24 16:27:53 crc kubenswrapper[4982]: I0224 16:27:53.646933 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/thanos-sidecar/0.log" Feb 24 16:27:53 crc kubenswrapper[4982]: I0224 16:27:53.684691 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/config-reloader/0.log" Feb 24 16:27:53 crc kubenswrapper[4982]: I0224 16:27:53.861820 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fc18fffb-2c78-4097-8145-143bf44b11dc/setup-container/0.log" Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.033258 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fc18fffb-2c78-4097-8145-143bf44b11dc/rabbitmq/0.log" Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.136750 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fc18fffb-2c78-4097-8145-143bf44b11dc/setup-container/0.log" Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.174103 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ec0c9d67-9dca-4bd7-bd58-fa6185479916/setup-container/0.log" Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.395888 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_ec0c9d67-9dca-4bd7-bd58-fa6185479916/setup-container/0.log"
Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.448995 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ec0c9d67-9dca-4bd7-bd58-fa6185479916/rabbitmq/0.log"
Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.480385 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_482245c3-03a8-4890-a48d-b234c5c78c3a/setup-container/0.log"
Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.686117 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_482245c3-03a8-4890-a48d-b234c5c78c3a/setup-container/0.log"
Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.694433 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_482245c3-03a8-4890-a48d-b234c5c78c3a/rabbitmq/0.log"
Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.726127 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_b68b4733-09b0-4fff-b032-f3339306a04d/setup-container/0.log"
Feb 24 16:27:54 crc kubenswrapper[4982]: I0224 16:27:54.943287 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_b68b4733-09b0-4fff-b032-f3339306a04d/setup-container/0.log"
Feb 24 16:27:55 crc kubenswrapper[4982]: I0224 16:27:55.014817 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb_905ac17f-4256-40f9-b638-454713515dc3/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 24 16:27:55 crc kubenswrapper[4982]: I0224 16:27:55.139307 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_b68b4733-09b0-4fff-b032-f3339306a04d/rabbitmq/0.log"
Feb 24 16:27:55 crc kubenswrapper[4982]: I0224 16:27:55.323878 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7g4bk_f0dc1e67-72c6-406f-ae09-eb8089da0840/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 24 16:27:55 crc kubenswrapper[4982]: I0224 16:27:55.370438 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh_f3a743f1-a8e5-4df7-a11d-60606242903e/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 24 16:27:55 crc kubenswrapper[4982]: I0224 16:27:55.575566 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rlds8_d53e94d7-cbbe-429c-8b0e-98bb41fb3dda/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 24 16:27:55 crc kubenswrapper[4982]: I0224 16:27:55.624331 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jb627_71c06248-ae2a-4b15-9774-c3f18e1e61cb/ssh-known-hosts-edpm-deployment/0.log"
Feb 24 16:27:55 crc kubenswrapper[4982]: I0224 16:27:55.884973 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68cc98f8f7-kjwtc_c89910f5-6c21-4f91-a07f-5b17503b3882/proxy-server/0.log"
Feb 24 16:27:56 crc kubenswrapper[4982]: I0224 16:27:56.010424 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68cc98f8f7-kjwtc_c89910f5-6c21-4f91-a07f-5b17503b3882/proxy-httpd/0.log"
Feb 24 16:27:56 crc kubenswrapper[4982]: I0224 16:27:56.024339 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-cwcwd_e5321445-9e2b-44c7-9975-2bfe929ead53/swift-ring-rebalance/0.log"
Feb 24 16:27:56 crc kubenswrapper[4982]: I0224 16:27:56.172790 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/account-auditor/0.log"
Feb 24 16:27:56 crc kubenswrapper[4982]: I0224 16:27:56.229604 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/account-reaper/0.log"
Feb 24 16:27:56 crc kubenswrapper[4982]: I0224 16:27:56.318603 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/account-replicator/0.log"
Feb 24 16:27:56 crc kubenswrapper[4982]: I0224 16:27:56.411555 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/container-auditor/0.log"
Feb 24 16:27:56 crc kubenswrapper[4982]: I0224 16:27:56.417926 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/account-server/0.log"
Feb 24 16:27:56 crc kubenswrapper[4982]: I0224 16:27:56.462295 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/container-replicator/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.017370 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/container-server/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.019347 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/container-updater/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.060873 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-auditor/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.346529 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-expirer/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.531343 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-updater/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.549056 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-replicator/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.560028 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-server/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.592815 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/rsync/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.754202 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/swift-recon-cron/0.log"
Feb 24 16:27:57 crc kubenswrapper[4982]: I0224 16:27:57.940217 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh_d376d1ae-0d9e-457a-97c3-dce655164119/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 24 16:27:58 crc kubenswrapper[4982]: I0224 16:27:58.065892 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6_eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 24 16:27:58 crc kubenswrapper[4982]: I0224 16:27:58.281965 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_92ce340a-a0e5-4ab6-984a-17b5b02bfa0a/test-operator-logs-container/0.log"
Feb 24 16:27:58 crc kubenswrapper[4982]: I0224 16:27:58.484717 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk_514b9bdd-6644-4170-bd57-2c6b1073b9cb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 24 16:27:59 crc kubenswrapper[4982]: I0224 16:27:59.118944 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9/tempest-tests-tempest-tests-runner/0.log"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.154083 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532508-2654s"]
Feb 24 16:28:00 crc kubenswrapper[4982]: E0224 16:28:00.154891 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69354491-d341-499d-a5d0-5b3ff22f3808" containerName="extract-content"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.154904 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="69354491-d341-499d-a5d0-5b3ff22f3808" containerName="extract-content"
Feb 24 16:28:00 crc kubenswrapper[4982]: E0224 16:28:00.154928 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69354491-d341-499d-a5d0-5b3ff22f3808" containerName="extract-utilities"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.154935 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="69354491-d341-499d-a5d0-5b3ff22f3808" containerName="extract-utilities"
Feb 24 16:28:00 crc kubenswrapper[4982]: E0224 16:28:00.154959 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69354491-d341-499d-a5d0-5b3ff22f3808" containerName="registry-server"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.154964 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="69354491-d341-499d-a5d0-5b3ff22f3808" containerName="registry-server"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.155182 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="69354491-d341-499d-a5d0-5b3ff22f3808" containerName="registry-server"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.156060 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532508-2654s"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.158615 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.158779 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.159022 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.214695 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532508-2654s"]
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.225789 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krlz2\" (UniqueName: \"kubernetes.io/projected/0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898-kube-api-access-krlz2\") pod \"auto-csr-approver-29532508-2654s\" (UID: \"0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898\") " pod="openshift-infra/auto-csr-approver-29532508-2654s"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.327141 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krlz2\" (UniqueName: \"kubernetes.io/projected/0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898-kube-api-access-krlz2\") pod \"auto-csr-approver-29532508-2654s\" (UID: \"0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898\") " pod="openshift-infra/auto-csr-approver-29532508-2654s"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.361369 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krlz2\" (UniqueName: \"kubernetes.io/projected/0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898-kube-api-access-krlz2\") pod \"auto-csr-approver-29532508-2654s\" (UID: \"0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898\") " pod="openshift-infra/auto-csr-approver-29532508-2654s"
Feb 24 16:28:00 crc kubenswrapper[4982]: I0224 16:28:00.522898 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532508-2654s"
Feb 24 16:28:01 crc kubenswrapper[4982]: I0224 16:28:01.732774 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532508-2654s"]
Feb 24 16:28:01 crc kubenswrapper[4982]: I0224 16:28:01.785224 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532508-2654s" event={"ID":"0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898","Type":"ContainerStarted","Data":"96209368c7b404babd7de04883bc5303bb34df0120ede0dbeabde88a59c952b1"}
Feb 24 16:28:03 crc kubenswrapper[4982]: I0224 16:28:03.825048 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532508-2654s" event={"ID":"0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898","Type":"ContainerStarted","Data":"06df4022a010a9b0a008a60620a14250e5bd337d7c06d1652f1e410c2e7b4d3d"}
Feb 24 16:28:03 crc kubenswrapper[4982]: I0224 16:28:03.854179 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532508-2654s" podStartSLOduration=2.947701895 podStartE2EDuration="3.854157225s" podCreationTimestamp="2026-02-24 16:28:00 +0000 UTC" firstStartedPulling="2026-02-24 16:28:01.73813572 +0000 UTC m=+5943.357194213" lastFinishedPulling="2026-02-24 16:28:02.64459105 +0000 UTC m=+5944.263649543" observedRunningTime="2026-02-24 16:28:03.842649751 +0000 UTC m=+5945.461708254" watchObservedRunningTime="2026-02-24 16:28:03.854157225 +0000 UTC m=+5945.473215718"
Feb 24 16:28:04 crc kubenswrapper[4982]: I0224 16:28:04.649933 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_01f84659-7499-464a-9476-fddfca0dec8a/memcached/0.log"
Feb 24 16:28:04 crc kubenswrapper[4982]: I0224 16:28:04.835840 4982 generic.go:334] "Generic (PLEG): container finished" podID="0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898" containerID="06df4022a010a9b0a008a60620a14250e5bd337d7c06d1652f1e410c2e7b4d3d" exitCode=0
Feb 24 16:28:04 crc kubenswrapper[4982]: I0224 16:28:04.835887 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532508-2654s" event={"ID":"0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898","Type":"ContainerDied","Data":"06df4022a010a9b0a008a60620a14250e5bd337d7c06d1652f1e410c2e7b4d3d"}
Feb 24 16:28:06 crc kubenswrapper[4982]: I0224 16:28:06.259861 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532508-2654s"
Feb 24 16:28:06 crc kubenswrapper[4982]: I0224 16:28:06.282652 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krlz2\" (UniqueName: \"kubernetes.io/projected/0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898-kube-api-access-krlz2\") pod \"0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898\" (UID: \"0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898\") "
Feb 24 16:28:06 crc kubenswrapper[4982]: I0224 16:28:06.292743 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898-kube-api-access-krlz2" (OuterVolumeSpecName: "kube-api-access-krlz2") pod "0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898" (UID: "0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898"). InnerVolumeSpecName "kube-api-access-krlz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:28:06 crc kubenswrapper[4982]: I0224 16:28:06.386179 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krlz2\" (UniqueName: \"kubernetes.io/projected/0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898-kube-api-access-krlz2\") on node \"crc\" DevicePath \"\""
Feb 24 16:28:06 crc kubenswrapper[4982]: I0224 16:28:06.855521 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532508-2654s" event={"ID":"0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898","Type":"ContainerDied","Data":"96209368c7b404babd7de04883bc5303bb34df0120ede0dbeabde88a59c952b1"}
Feb 24 16:28:06 crc kubenswrapper[4982]: I0224 16:28:06.855562 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96209368c7b404babd7de04883bc5303bb34df0120ede0dbeabde88a59c952b1"
Feb 24 16:28:06 crc kubenswrapper[4982]: I0224 16:28:06.855571 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532508-2654s"
Feb 24 16:28:06 crc kubenswrapper[4982]: I0224 16:28:06.908977 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532502-vskcc"]
Feb 24 16:28:06 crc kubenswrapper[4982]: I0224 16:28:06.919933 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532502-vskcc"]
Feb 24 16:28:07 crc kubenswrapper[4982]: I0224 16:28:07.159106 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50" path="/var/lib/kubelet/pods/0fcfa0ca-f10e-4538-8bec-0bf4e1e6fd50/volumes"
Feb 24 16:28:29 crc kubenswrapper[4982]: I0224 16:28:29.879971 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/util/0.log"
Feb 24 16:28:30 crc kubenswrapper[4982]: I0224 16:28:30.022922 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/util/0.log"
Feb 24 16:28:30 crc kubenswrapper[4982]: I0224 16:28:30.058355 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/pull/0.log"
Feb 24 16:28:30 crc kubenswrapper[4982]: I0224 16:28:30.072639 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/pull/0.log"
Feb 24 16:28:30 crc kubenswrapper[4982]: I0224 16:28:30.264167 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/extract/0.log"
Feb 24 16:28:30 crc kubenswrapper[4982]: I0224 16:28:30.285099 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/util/0.log"
Feb 24 16:28:30 crc kubenswrapper[4982]: I0224 16:28:30.293121 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/pull/0.log"
Feb 24 16:28:30 crc kubenswrapper[4982]: I0224 16:28:30.682096 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-rqq7v_b0f19215-2346-4a5a-8b4a-30f19af5db6c/manager/0.log"
Feb 24 16:28:31 crc kubenswrapper[4982]: I0224 16:28:31.031680 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-8dxfx_e07053fe-da4d-437b-b884-659d18acc903/manager/0.log"
Feb 24 16:28:31 crc kubenswrapper[4982]: I0224 16:28:31.284780 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-4fbcm_84e2fa7b-8efb-4e72-b6a4-42b10ab15984/manager/0.log"
Feb 24 16:28:31 crc kubenswrapper[4982]: I0224 16:28:31.457429 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-sl785_7da568d1-374c-4687-8724-ceee9b3857a7/manager/0.log"
Feb 24 16:28:32 crc kubenswrapper[4982]: I0224 16:28:32.610392 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-f5stb_5da22390-3c90-4096-8f6b-ac0f8feb4f46/manager/0.log"
Feb 24 16:28:32 crc kubenswrapper[4982]: I0224 16:28:32.969606 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-pcpml_7dbd2798-1deb-4014-9bad-8446f47f49e8/manager/0.log"
Feb 24 16:28:33 crc kubenswrapper[4982]: I0224 16:28:33.029480 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-zmxj9_5602df8b-a253-42b4-8b7d-93a3a793fa2a/manager/0.log"
Feb 24 16:28:33 crc kubenswrapper[4982]: I0224 16:28:33.183758 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-bqhfw_c863f339-9142-4edc-b547-9bf0fd0d64bc/manager/0.log"
Feb 24 16:28:33 crc kubenswrapper[4982]: I0224 16:28:33.279409 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-xk8mt_cdca3167-b0ff-41e3-8802-02d92f829aff/manager/0.log"
Feb 24 16:28:33 crc kubenswrapper[4982]: I0224 16:28:33.465461 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-fwqhl_12f6eea3-aefd-485f-a582-af40549cefa0/manager/0.log"
Feb 24 16:28:34 crc kubenswrapper[4982]: I0224 16:28:34.406030 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-l8xls_5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7/manager/0.log"
Feb 24 16:28:34 crc kubenswrapper[4982]: I0224 16:28:34.539996 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-247rw_d6146e6e-9a66-43aa-803c-df072ec31d11/manager/0.log"
Feb 24 16:28:34 crc kubenswrapper[4982]: I0224 16:28:34.660097 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-rgfzx_c0967978-25a6-416a-81be-1153d5f5f74b/manager/0.log"
Feb 24 16:28:34 crc kubenswrapper[4982]: I0224 16:28:34.711984 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c7st96_c4924244-1803-429b-9c50-8a5c33b1f1b6/manager/0.log"
Feb 24 16:28:35 crc kubenswrapper[4982]: I0224 16:28:35.155116 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-f8d897576-j27pk_26238893-6a15-42a9-ae76-3c1c9aa798ee/operator/0.log"
Feb 24 16:28:35 crc kubenswrapper[4982]: I0224 16:28:35.210580 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-t7nqk_8c774d84-399c-417b-a926-981419902625/registry-server/0.log"
Feb 24 16:28:35 crc kubenswrapper[4982]: I0224 16:28:35.456732 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-pmh6c_d7687408-a30d-42c8-826f-759659e87262/manager/0.log"
Feb 24 16:28:35 crc kubenswrapper[4982]: I0224 16:28:35.609406 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-j2q85_2ed106e6-c770-4724-a803-29b4d1b74b6b/manager/0.log"
Feb 24 16:28:35 crc kubenswrapper[4982]: I0224 16:28:35.739720 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-w8g8f_f03d42ea-ad31-451a-99a7-c1ecc595f924/operator/0.log"
Feb 24 16:28:35 crc kubenswrapper[4982]: I0224 16:28:35.939075 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-xzh2k_8670907a-5fad-4602-8578-5eb1a19d1b44/manager/0.log"
Feb 24 16:28:36 crc kubenswrapper[4982]: I0224 16:28:36.260667 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-qhlv4_95e748d2-45c9-4279-b1b4-9a0d18dce523/manager/0.log"
Feb 24 16:28:36 crc kubenswrapper[4982]: I0224 16:28:36.559100 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-pjtlt_c45b9d77-86d6-4763-b9aa-44549e04016a/manager/0.log"
Feb 24 16:28:36 crc kubenswrapper[4982]: I0224 16:28:36.615701 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6c7fcb66df-f9pl4_d4d8baf6-e8b2-4a05-b73e-ca563c3bb172/manager/0.log"
Feb 24 16:28:37 crc kubenswrapper[4982]: I0224 16:28:37.046560 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58cc7d798f-pqcz2_ee761700-7a4b-4f96-8f99-31c55ed51962/manager/0.log"
Feb 24 16:28:42 crc kubenswrapper[4982]: I0224 16:28:42.360889 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-xdgp8_b5bb21b9-5878-4960-9bd6-f46b48419f59/manager/0.log"
Feb 24 16:28:44 crc kubenswrapper[4982]: I0224 16:28:44.699252 4982 trace.go:236] Trace[152530501]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (24-Feb-2026 16:28:43.661) (total time: 1036ms):
Feb 24 16:28:44 crc kubenswrapper[4982]: Trace[152530501]: [1.036177887s] [1.036177887s] END
Feb 24 16:28:55 crc kubenswrapper[4982]: I0224 16:28:55.289476 4982 scope.go:117] "RemoveContainer" containerID="a0b116586568ee2280be5e7efabbedafb1fc9a3cf0c24a295e68299a6899eebd"
Feb 24 16:28:59 crc kubenswrapper[4982]: I0224 16:28:59.936462 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-b6dkb_8dd5c785-b167-4b52-8c16-4eea0fcb5685/control-plane-machine-set-operator/0.log"
Feb 24 16:29:00 crc kubenswrapper[4982]: I0224 16:29:00.118400 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5sdrm_1b0d00bf-0cb6-4fa2-9561-edafa4a10082/kube-rbac-proxy/0.log"
Feb 24 16:29:00 crc kubenswrapper[4982]: I0224 16:29:00.142515 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5sdrm_1b0d00bf-0cb6-4fa2-9561-edafa4a10082/machine-api-operator/0.log"
Feb 24 16:29:08 crc kubenswrapper[4982]: I0224 16:29:08.737864 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 16:29:08 crc kubenswrapper[4982]: I0224 16:29:08.738145 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 16:29:14 crc kubenswrapper[4982]: I0224 16:29:14.538360 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ll6lf_d543e947-1fd6-4253-84c8-5dd81a835ba4/cert-manager-controller/0.log"
Feb 24 16:29:14 crc kubenswrapper[4982]: I0224 16:29:14.720765 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4zfz6_8422a1e6-92b5-4b34-a360-004609a25ac0/cert-manager-webhook/0.log"
Feb 24 16:29:14 crc kubenswrapper[4982]: I0224 16:29:14.749194 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mpsc5_906e9aec-8f03-4230-8b2b-01459a8c2fcc/cert-manager-cainjector/0.log"
Feb 24 16:29:29 crc kubenswrapper[4982]: I0224 16:29:29.430264 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-kqblm_f0fec11a-acd6-4eb3-9019-2ecdd41eccf3/nmstate-console-plugin/0.log"
Feb 24 16:29:29 crc kubenswrapper[4982]: I0224 16:29:29.614408 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8k954_42eaa5bc-682b-40c1-ace7-12acd0a45032/nmstate-handler/0.log"
Feb 24 16:29:29 crc kubenswrapper[4982]: I0224 16:29:29.699597 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7rt8q_1e0416f6-ebc9-4a01-a69a-904aab8b4cbb/kube-rbac-proxy/0.log"
Feb 24 16:29:29 crc kubenswrapper[4982]: I0224 16:29:29.775944 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7rt8q_1e0416f6-ebc9-4a01-a69a-904aab8b4cbb/nmstate-metrics/0.log"
Feb 24 16:29:29 crc kubenswrapper[4982]: I0224 16:29:29.850950 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-lj9wc_9d9e0bf3-ed22-416c-b672-8df43d3014c0/nmstate-operator/0.log"
Feb 24 16:29:29 crc kubenswrapper[4982]: I0224 16:29:29.967170 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-vfvxq_ef265381-af0c-4642-93cb-075344e3650c/nmstate-webhook/0.log"
Feb 24 16:29:38 crc kubenswrapper[4982]: I0224 16:29:38.738690 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 16:29:38 crc kubenswrapper[4982]: I0224 16:29:38.739615 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 16:29:46 crc kubenswrapper[4982]: I0224 16:29:46.778121 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4467cd99-kv4ps_9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c/kube-rbac-proxy/0.log"
Feb 24 16:29:46 crc kubenswrapper[4982]: I0224 16:29:46.800623 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4467cd99-kv4ps_9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c/manager/0.log"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.154713 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532510-wlx6h"]
Feb 24 16:30:00 crc kubenswrapper[4982]: E0224 16:30:00.155669 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898" containerName="oc"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.155681 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898" containerName="oc"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.155922 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898" containerName="oc"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.156776 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532510-wlx6h"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.159406 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.159607 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.160044 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.165776 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"]
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.169492 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.174380 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.178948 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.214536 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"]
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.223889 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghq8\" (UniqueName: \"kubernetes.io/projected/02f297b1-0ddb-4703-abb9-329c0dd5863d-kube-api-access-sghq8\") pod \"auto-csr-approver-29532510-wlx6h\" (UID: \"02f297b1-0ddb-4703-abb9-329c0dd5863d\") " pod="openshift-infra/auto-csr-approver-29532510-wlx6h"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.225658 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532510-wlx6h"]
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.325927 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sghq8\" (UniqueName: \"kubernetes.io/projected/02f297b1-0ddb-4703-abb9-329c0dd5863d-kube-api-access-sghq8\") pod \"auto-csr-approver-29532510-wlx6h\" (UID: \"02f297b1-0ddb-4703-abb9-329c0dd5863d\") " pod="openshift-infra/auto-csr-approver-29532510-wlx6h"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.326046 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39efb95e-30e3-4173-806d-4b3d5ecaab02-secret-volume\") pod \"collect-profiles-29532510-qmc28\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.326077 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39efb95e-30e3-4173-806d-4b3d5ecaab02-config-volume\") pod \"collect-profiles-29532510-qmc28\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.326106 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phrb\" (UniqueName: \"kubernetes.io/projected/39efb95e-30e3-4173-806d-4b3d5ecaab02-kube-api-access-4phrb\") pod \"collect-profiles-29532510-qmc28\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.388405 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sghq8\" (UniqueName: \"kubernetes.io/projected/02f297b1-0ddb-4703-abb9-329c0dd5863d-kube-api-access-sghq8\") pod \"auto-csr-approver-29532510-wlx6h\" (UID: \"02f297b1-0ddb-4703-abb9-329c0dd5863d\") " pod="openshift-infra/auto-csr-approver-29532510-wlx6h"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.428843 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4phrb\" (UniqueName: \"kubernetes.io/projected/39efb95e-30e3-4173-806d-4b3d5ecaab02-kube-api-access-4phrb\") pod \"collect-profiles-29532510-qmc28\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.429073 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39efb95e-30e3-4173-806d-4b3d5ecaab02-secret-volume\") pod \"collect-profiles-29532510-qmc28\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.429095 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39efb95e-30e3-4173-806d-4b3d5ecaab02-config-volume\") pod \"collect-profiles-29532510-qmc28\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.430688 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39efb95e-30e3-4173-806d-4b3d5ecaab02-config-volume\") pod \"collect-profiles-29532510-qmc28\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.436537 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39efb95e-30e3-4173-806d-4b3d5ecaab02-secret-volume\") pod \"collect-profiles-29532510-qmc28\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.456103 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4phrb\" (UniqueName: \"kubernetes.io/projected/39efb95e-30e3-4173-806d-4b3d5ecaab02-kube-api-access-4phrb\") pod \"collect-profiles-29532510-qmc28\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.478877 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532510-wlx6h"
Feb 24 16:30:00 crc kubenswrapper[4982]: I0224 16:30:00.490829 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:01 crc kubenswrapper[4982]: I0224 16:30:01.027069 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"]
Feb 24 16:30:01 crc kubenswrapper[4982]: I0224 16:30:01.128802 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532510-wlx6h"]
Feb 24 16:30:01 crc kubenswrapper[4982]: W0224 16:30:01.142147 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02f297b1_0ddb_4703_abb9_329c0dd5863d.slice/crio-6215716ff193c599905580a69b99b7b32ec0b433755d7508172ea5dae5f75484 WatchSource:0}: Error finding container 6215716ff193c599905580a69b99b7b32ec0b433755d7508172ea5dae5f75484: Status 404 returned error can't find the container with id 6215716ff193c599905580a69b99b7b32ec0b433755d7508172ea5dae5f75484
Feb 24 16:30:01 crc kubenswrapper[4982]: I0224 16:30:01.149829 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 16:30:01 crc kubenswrapper[4982]: I0224 16:30:01.241940 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28" event={"ID":"39efb95e-30e3-4173-806d-4b3d5ecaab02","Type":"ContainerStarted","Data":"6bdd632a700b989d929e902532b46c3a8685c142f526d1f18c450f589d58d593"}
Feb 24 16:30:01 crc kubenswrapper[4982]: I0224 16:30:01.243605 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532510-wlx6h" event={"ID":"02f297b1-0ddb-4703-abb9-329c0dd5863d","Type":"ContainerStarted","Data":"6215716ff193c599905580a69b99b7b32ec0b433755d7508172ea5dae5f75484"}
Feb 24 16:30:02 crc kubenswrapper[4982]: I0224 16:30:02.167004 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-wjv8g_5d19a6f9-587e-42fc-8dd5-1a363bac4c09/prometheus-operator/0.log"
Feb 24 16:30:02 crc kubenswrapper[4982]: I0224 16:30:02.253712 4982 generic.go:334] "Generic (PLEG): container finished" podID="39efb95e-30e3-4173-806d-4b3d5ecaab02" containerID="8633e4074b252aad0f48f44508ef811d05b46254674447b4ef01c633db616e84" exitCode=0
Feb 24 16:30:02 crc kubenswrapper[4982]: I0224 16:30:02.254453 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28" event={"ID":"39efb95e-30e3-4173-806d-4b3d5ecaab02","Type":"ContainerDied","Data":"8633e4074b252aad0f48f44508ef811d05b46254674447b4ef01c633db616e84"}
Feb 24 16:30:02 crc kubenswrapper[4982]: I0224 16:30:02.373944 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb_14ead058-d4ed-4e55-9632-a5e2f571b469/prometheus-operator-admission-webhook/0.log"
Feb 24 16:30:02 crc kubenswrapper[4982]: I0224 16:30:02.394535 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc_ca2d2e72-fd73-4ad0-8d81-718235c7f891/prometheus-operator-admission-webhook/0.log"
Feb 24 16:30:02 crc kubenswrapper[4982]: I0224 16:30:02.611665 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m6fxj_10e410a1-e886-451e-9cfc-40f6812a4d0d/observability-ui-dashboards/0.log"
Feb 24 16:30:02 crc kubenswrapper[4982]: I0224 16:30:02.614190 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9bmsh_8186e569-67ca-4273-9de2-130ffd7dcf09/operator/0.log"
Feb 24 16:30:02 crc kubenswrapper[4982]: I0224 16:30:02.806977 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4hqxq_ef281ee3-4742-4dd3-947f-32c7f039f5ec/perses-operator/0.log"
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.716282 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.801685 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4phrb\" (UniqueName: \"kubernetes.io/projected/39efb95e-30e3-4173-806d-4b3d5ecaab02-kube-api-access-4phrb\") pod \"39efb95e-30e3-4173-806d-4b3d5ecaab02\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") "
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.801900 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39efb95e-30e3-4173-806d-4b3d5ecaab02-config-volume\") pod \"39efb95e-30e3-4173-806d-4b3d5ecaab02\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") "
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.802195 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39efb95e-30e3-4173-806d-4b3d5ecaab02-secret-volume\") pod \"39efb95e-30e3-4173-806d-4b3d5ecaab02\" (UID: \"39efb95e-30e3-4173-806d-4b3d5ecaab02\") "
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.803462 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39efb95e-30e3-4173-806d-4b3d5ecaab02-config-volume" (OuterVolumeSpecName: "config-volume") pod "39efb95e-30e3-4173-806d-4b3d5ecaab02" (UID: "39efb95e-30e3-4173-806d-4b3d5ecaab02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.806443 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39efb95e-30e3-4173-806d-4b3d5ecaab02-config-volume\") on node \"crc\" DevicePath \"\""
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.810782 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39efb95e-30e3-4173-806d-4b3d5ecaab02-kube-api-access-4phrb" (OuterVolumeSpecName: "kube-api-access-4phrb") pod "39efb95e-30e3-4173-806d-4b3d5ecaab02" (UID: "39efb95e-30e3-4173-806d-4b3d5ecaab02"). InnerVolumeSpecName "kube-api-access-4phrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.830506 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39efb95e-30e3-4173-806d-4b3d5ecaab02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "39efb95e-30e3-4173-806d-4b3d5ecaab02" (UID: "39efb95e-30e3-4173-806d-4b3d5ecaab02"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.908338 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39efb95e-30e3-4173-806d-4b3d5ecaab02-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 24 16:30:03 crc kubenswrapper[4982]: I0224 16:30:03.908618 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4phrb\" (UniqueName: \"kubernetes.io/projected/39efb95e-30e3-4173-806d-4b3d5ecaab02-kube-api-access-4phrb\") on node \"crc\" DevicePath \"\""
Feb 24 16:30:04 crc kubenswrapper[4982]: I0224 16:30:04.277485 4982 generic.go:334] "Generic (PLEG): container finished" podID="02f297b1-0ddb-4703-abb9-329c0dd5863d" containerID="864940e75182aeeadcd3ce7c93790f7618796fbef5f794f3edf7b68d8b619d99" exitCode=0
Feb 24 16:30:04 crc kubenswrapper[4982]: I0224 16:30:04.278007 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532510-wlx6h" event={"ID":"02f297b1-0ddb-4703-abb9-329c0dd5863d","Type":"ContainerDied","Data":"864940e75182aeeadcd3ce7c93790f7618796fbef5f794f3edf7b68d8b619d99"}
Feb 24 16:30:04 crc kubenswrapper[4982]: I0224 16:30:04.279748 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28" event={"ID":"39efb95e-30e3-4173-806d-4b3d5ecaab02","Type":"ContainerDied","Data":"6bdd632a700b989d929e902532b46c3a8685c142f526d1f18c450f589d58d593"}
Feb 24 16:30:04 crc kubenswrapper[4982]: I0224 16:30:04.279773 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bdd632a700b989d929e902532b46c3a8685c142f526d1f18c450f589d58d593"
Feb 24 16:30:04 crc kubenswrapper[4982]: I0224 16:30:04.279809 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532510-qmc28"
Feb 24 16:30:04 crc kubenswrapper[4982]: I0224 16:30:04.807998 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n"]
Feb 24 16:30:04 crc kubenswrapper[4982]: I0224 16:30:04.820931 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532465-45w9n"]
Feb 24 16:30:05 crc kubenswrapper[4982]: I0224 16:30:05.177430 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f08c468-b7cc-4ca7-bc1c-750894e12286" path="/var/lib/kubelet/pods/7f08c468-b7cc-4ca7-bc1c-750894e12286/volumes"
Feb 24 16:30:05 crc kubenswrapper[4982]: I0224 16:30:05.978197 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532510-wlx6h"
Feb 24 16:30:06 crc kubenswrapper[4982]: I0224 16:30:06.064835 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sghq8\" (UniqueName: \"kubernetes.io/projected/02f297b1-0ddb-4703-abb9-329c0dd5863d-kube-api-access-sghq8\") pod \"02f297b1-0ddb-4703-abb9-329c0dd5863d\" (UID: \"02f297b1-0ddb-4703-abb9-329c0dd5863d\") "
Feb 24 16:30:06 crc kubenswrapper[4982]: I0224 16:30:06.084211 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f297b1-0ddb-4703-abb9-329c0dd5863d-kube-api-access-sghq8" (OuterVolumeSpecName: "kube-api-access-sghq8") pod "02f297b1-0ddb-4703-abb9-329c0dd5863d" (UID: "02f297b1-0ddb-4703-abb9-329c0dd5863d"). InnerVolumeSpecName "kube-api-access-sghq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:30:06 crc kubenswrapper[4982]: I0224 16:30:06.167667 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sghq8\" (UniqueName: \"kubernetes.io/projected/02f297b1-0ddb-4703-abb9-329c0dd5863d-kube-api-access-sghq8\") on node \"crc\" DevicePath \"\""
Feb 24 16:30:06 crc kubenswrapper[4982]: I0224 16:30:06.301069 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532510-wlx6h" event={"ID":"02f297b1-0ddb-4703-abb9-329c0dd5863d","Type":"ContainerDied","Data":"6215716ff193c599905580a69b99b7b32ec0b433755d7508172ea5dae5f75484"}
Feb 24 16:30:06 crc kubenswrapper[4982]: I0224 16:30:06.301123 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6215716ff193c599905580a69b99b7b32ec0b433755d7508172ea5dae5f75484"
Feb 24 16:30:06 crc kubenswrapper[4982]: I0224 16:30:06.301190 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532510-wlx6h"
Feb 24 16:30:07 crc kubenswrapper[4982]: I0224 16:30:07.027795 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532504-xrvjc"]
Feb 24 16:30:07 crc kubenswrapper[4982]: I0224 16:30:07.038277 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532504-xrvjc"]
Feb 24 16:30:07 crc kubenswrapper[4982]: I0224 16:30:07.158042 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1033e866-9d20-40b0-aad6-f651e6ae99a4" path="/var/lib/kubelet/pods/1033e866-9d20-40b0-aad6-f651e6ae99a4/volumes"
Feb 24 16:30:08 crc kubenswrapper[4982]: I0224 16:30:08.744579 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 16:30:08 crc kubenswrapper[4982]: I0224 16:30:08.745082 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 16:30:08 crc kubenswrapper[4982]: I0224 16:30:08.745146 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf"
Feb 24 16:30:08 crc kubenswrapper[4982]: I0224 16:30:08.751457 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 16:30:08 crc kubenswrapper[4982]: I0224 16:30:08.751579 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" gracePeriod=600
Feb 24 16:30:09 crc kubenswrapper[4982]: E0224 16:30:09.307954 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:30:09 crc kubenswrapper[4982]: I0224 16:30:09.358229 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" exitCode=0
Feb 24 16:30:09 crc kubenswrapper[4982]: I0224 16:30:09.358523 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"}
Feb 24 16:30:09 crc kubenswrapper[4982]: I0224 16:30:09.358559 4982 scope.go:117] "RemoveContainer" containerID="3a897f4a77a414cc436dfc13cb27f56aa83c877cbc34f4775483201dc8bc5633"
Feb 24 16:30:09 crc kubenswrapper[4982]: I0224 16:30:09.359529 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:30:09 crc kubenswrapper[4982]: E0224 16:30:09.359972 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.693869 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-khb85"]
Feb 24 16:30:10 crc kubenswrapper[4982]: E0224 16:30:10.694724 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f297b1-0ddb-4703-abb9-329c0dd5863d" containerName="oc"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.694739 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f297b1-0ddb-4703-abb9-329c0dd5863d" containerName="oc"
Feb 24 16:30:10 crc kubenswrapper[4982]: E0224 16:30:10.694789 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39efb95e-30e3-4173-806d-4b3d5ecaab02" containerName="collect-profiles"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.694795 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="39efb95e-30e3-4173-806d-4b3d5ecaab02" containerName="collect-profiles"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.695070 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f297b1-0ddb-4703-abb9-329c0dd5863d" containerName="oc"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.695084 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="39efb95e-30e3-4173-806d-4b3d5ecaab02" containerName="collect-profiles"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.696824 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.706686 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khb85"]
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.796159 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-catalog-content\") pod \"community-operators-khb85\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") " pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.796221 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578tt\" (UniqueName: \"kubernetes.io/projected/f3e92941-5470-4fab-a165-7c55a0fa1a83-kube-api-access-578tt\") pod \"community-operators-khb85\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") " pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.797027 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-utilities\") pod \"community-operators-khb85\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") " pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.898849 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-catalog-content\") pod \"community-operators-khb85\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") " pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.898882 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578tt\" (UniqueName: \"kubernetes.io/projected/f3e92941-5470-4fab-a165-7c55a0fa1a83-kube-api-access-578tt\") pod \"community-operators-khb85\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") " pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.899020 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-utilities\") pod \"community-operators-khb85\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") " pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.899426 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-utilities\") pod \"community-operators-khb85\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") " pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:10 crc kubenswrapper[4982]: I0224 16:30:10.899554 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-catalog-content\") pod \"community-operators-khb85\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") " pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:11 crc kubenswrapper[4982]: I0224 16:30:11.462598 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578tt\" (UniqueName: \"kubernetes.io/projected/f3e92941-5470-4fab-a165-7c55a0fa1a83-kube-api-access-578tt\") pod \"community-operators-khb85\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") " pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:11 crc kubenswrapper[4982]: I0224 16:30:11.630264 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:12 crc kubenswrapper[4982]: I0224 16:30:12.255840 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khb85"]
Feb 24 16:30:12 crc kubenswrapper[4982]: I0224 16:30:12.415882 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khb85" event={"ID":"f3e92941-5470-4fab-a165-7c55a0fa1a83","Type":"ContainerStarted","Data":"685685c70131db591a8f084808097f04d56227ed5706883b5946e67ae87025cf"}
Feb 24 16:30:13 crc kubenswrapper[4982]: I0224 16:30:13.426617 4982 generic.go:334] "Generic (PLEG): container finished" podID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerID="48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a" exitCode=0
Feb 24 16:30:13 crc kubenswrapper[4982]: I0224 16:30:13.426659 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khb85" event={"ID":"f3e92941-5470-4fab-a165-7c55a0fa1a83","Type":"ContainerDied","Data":"48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a"}
Feb 24 16:30:14 crc kubenswrapper[4982]: I0224 16:30:14.441545 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khb85" event={"ID":"f3e92941-5470-4fab-a165-7c55a0fa1a83","Type":"ContainerStarted","Data":"a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4"}
Feb 24 16:30:16 crc kubenswrapper[4982]: I0224 16:30:16.482983 4982 generic.go:334] "Generic (PLEG): container finished" podID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerID="a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4" exitCode=0
Feb 24 16:30:16 crc kubenswrapper[4982]: I0224 16:30:16.483353 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khb85" event={"ID":"f3e92941-5470-4fab-a165-7c55a0fa1a83","Type":"ContainerDied","Data":"a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4"}
Feb 24 16:30:17 crc kubenswrapper[4982]: I0224 16:30:17.520382 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khb85" event={"ID":"f3e92941-5470-4fab-a165-7c55a0fa1a83","Type":"ContainerStarted","Data":"c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f"}
Feb 24 16:30:17 crc kubenswrapper[4982]: I0224 16:30:17.539560 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-khb85" podStartSLOduration=4.102417007 podStartE2EDuration="7.539545406s" podCreationTimestamp="2026-02-24 16:30:10 +0000 UTC" firstStartedPulling="2026-02-24 16:30:13.42865404 +0000 UTC m=+6075.047712533" lastFinishedPulling="2026-02-24 16:30:16.865782439 +0000 UTC m=+6078.484840932" observedRunningTime="2026-02-24 16:30:17.536643757 +0000 UTC m=+6079.155702250" watchObservedRunningTime="2026-02-24 16:30:17.539545406 +0000 UTC m=+6079.158603899"
Feb 24 16:30:20 crc kubenswrapper[4982]: I0224 16:30:20.146072 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:30:20 crc kubenswrapper[4982]: E0224 16:30:20.147312 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:30:20 crc kubenswrapper[4982]: I0224 16:30:20.919914 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-d86g4_dd40714b-1f28-413d-bfec-b2c20b09e12f/cluster-logging-operator/0.log"
Feb 24 16:30:21 crc kubenswrapper[4982]: I0224 16:30:21.092669 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-vt6tk_fd36302e-5b75-4a73-ae39-e4a8e58f2682/collector/0.log"
Feb 24 16:30:21 crc kubenswrapper[4982]: I0224 16:30:21.132886 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_8eb67e33-3f41-4046-930e-babb8b65f3cc/loki-compactor/0.log"
Feb 24 16:30:21 crc kubenswrapper[4982]: I0224 16:30:21.303885 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-p6kh9_8fa63326-7a48-4c93-bad4-6ddb3d1d0731/loki-distributor/0.log"
Feb 24 16:30:21 crc kubenswrapper[4982]: I0224 16:30:21.342108 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76767f4456-gmbbt_8a3d5174-0a86-43bc-bc05-a974c01aef1b/gateway/0.log"
Feb 24 16:30:21 crc kubenswrapper[4982]: I0224 16:30:21.631173 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:21 crc kubenswrapper[4982]: I0224 16:30:21.631456 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:21 crc kubenswrapper[4982]: I0224 16:30:21.968469 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:22 crc kubenswrapper[4982]: I0224 16:30:22.041086 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76767f4456-gmbbt_8a3d5174-0a86-43bc-bc05-a974c01aef1b/opa/0.log"
Feb 24 16:30:22 crc kubenswrapper[4982]: I0224 16:30:22.115635 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76767f4456-hfv4z_f653b7f2-9a99-4426-b855-beb8dde56230/opa/0.log"
Feb 24 16:30:22 crc kubenswrapper[4982]: I0224 16:30:22.151898 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76767f4456-hfv4z_f653b7f2-9a99-4426-b855-beb8dde56230/gateway/0.log"
Feb 24 16:30:22 crc kubenswrapper[4982]: I0224 16:30:22.485704 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_3053cc27-e7fb-460e-84f8-92085a6aa8e5/loki-index-gateway/0.log"
Feb 24 16:30:22 crc kubenswrapper[4982]: I0224 16:30:22.626666 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:22 crc kubenswrapper[4982]: I0224 16:30:22.676023 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khb85"]
Feb 24 16:30:22 crc kubenswrapper[4982]: I0224 16:30:22.695636 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_f4e9d226-da8e-46e8-b378-5aafba527e2c/loki-ingester/0.log"
Feb 24 16:30:22 crc kubenswrapper[4982]: I0224 16:30:22.716524 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-cldqm_3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a/loki-querier/0.log"
Feb 24 16:30:22 crc kubenswrapper[4982]: I0224 16:30:22.886936 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-jr57w_6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4/loki-query-frontend/0.log"
Feb 24 16:30:24 crc kubenswrapper[4982]: I0224 16:30:24.597436 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-khb85" podUID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerName="registry-server" containerID="cri-o://c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f" gracePeriod=2
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.114698 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khb85"
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.247859 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-utilities\") pod \"f3e92941-5470-4fab-a165-7c55a0fa1a83\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") "
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.248156 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578tt\" (UniqueName: \"kubernetes.io/projected/f3e92941-5470-4fab-a165-7c55a0fa1a83-kube-api-access-578tt\") pod \"f3e92941-5470-4fab-a165-7c55a0fa1a83\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") "
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.248246 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-catalog-content\") pod \"f3e92941-5470-4fab-a165-7c55a0fa1a83\" (UID: \"f3e92941-5470-4fab-a165-7c55a0fa1a83\") "
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.248570 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-utilities" (OuterVolumeSpecName: "utilities") pod "f3e92941-5470-4fab-a165-7c55a0fa1a83" (UID: "f3e92941-5470-4fab-a165-7c55a0fa1a83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.248840 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.267694 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e92941-5470-4fab-a165-7c55a0fa1a83-kube-api-access-578tt" (OuterVolumeSpecName: "kube-api-access-578tt") pod "f3e92941-5470-4fab-a165-7c55a0fa1a83" (UID: "f3e92941-5470-4fab-a165-7c55a0fa1a83"). InnerVolumeSpecName "kube-api-access-578tt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.313813 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3e92941-5470-4fab-a165-7c55a0fa1a83" (UID: "f3e92941-5470-4fab-a165-7c55a0fa1a83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.350996 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-578tt\" (UniqueName: \"kubernetes.io/projected/f3e92941-5470-4fab-a165-7c55a0fa1a83-kube-api-access-578tt\") on node \"crc\" DevicePath \"\""
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.351230 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e92941-5470-4fab-a165-7c55a0fa1a83-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.608853 4982 generic.go:334] "Generic (PLEG): container finished" podID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerID="c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f" exitCode=0
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.608889 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khb85" event={"ID":"f3e92941-5470-4fab-a165-7c55a0fa1a83","Type":"ContainerDied","Data":"c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f"}
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.608913 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khb85" event={"ID":"f3e92941-5470-4fab-a165-7c55a0fa1a83","Type":"ContainerDied","Data":"685685c70131db591a8f084808097f04d56227ed5706883b5946e67ae87025cf"}
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.608931 4982 scope.go:117] "RemoveContainer" containerID="c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f"
Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.609040 4982 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-khb85" Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.658593 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khb85"] Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.664798 4982 scope.go:117] "RemoveContainer" containerID="a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4" Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.670352 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-khb85"] Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.694348 4982 scope.go:117] "RemoveContainer" containerID="48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a" Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.768617 4982 scope.go:117] "RemoveContainer" containerID="c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f" Feb 24 16:30:25 crc kubenswrapper[4982]: E0224 16:30:25.769191 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f\": container with ID starting with c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f not found: ID does not exist" containerID="c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f" Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.769241 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f"} err="failed to get container status \"c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f\": rpc error: code = NotFound desc = could not find container \"c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f\": container with ID starting with c829714e3cf77df76c282c4b4cab720da62ab9b43868862a5463fe6e73e8709f not found: ID does not exist" Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.769271 4982 scope.go:117] "RemoveContainer" containerID="a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4" Feb 24 16:30:25 crc kubenswrapper[4982]: E0224 16:30:25.769813 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4\": container with ID starting with a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4 not found: ID does not exist" containerID="a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4" Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.769866 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4"} err="failed to get container status \"a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4\": rpc error: code = NotFound desc = could not find container \"a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4\": container with ID starting with a0f12b21be1d559df1c71d6c5831651cb2a1d8ff4c93c3e6a456f480296037d4 not found: ID does not exist" Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.769900 4982 scope.go:117] "RemoveContainer" containerID="48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a" Feb 24 16:30:25 crc kubenswrapper[4982]: E0224 16:30:25.770329 4982 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a\": container with ID starting with 48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a not found: ID does not exist" containerID="48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a" Feb 24 16:30:25 crc kubenswrapper[4982]: I0224 16:30:25.770362 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a"} err="failed to get container status \"48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a\": rpc error: code = NotFound desc = could not find container \"48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a\": container with ID starting with 48b1676da4e25334fa1b0ac7a63791eb9f6bec85b20c680fc8fe3e2c911b8e3a not found: ID does not exist" Feb 24 16:30:27 crc kubenswrapper[4982]: I0224 16:30:27.162858 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e92941-5470-4fab-a165-7c55a0fa1a83" path="/var/lib/kubelet/pods/f3e92941-5470-4fab-a165-7c55a0fa1a83/volumes" Feb 24 16:30:33 crc kubenswrapper[4982]: I0224 16:30:33.146067 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:30:33 crc kubenswrapper[4982]: E0224 16:30:33.147057 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:30:38 crc kubenswrapper[4982]: I0224 16:30:38.324522 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-flpt4_59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad/kube-rbac-proxy/0.log" Feb 24 16:30:38 crc kubenswrapper[4982]: I0224 16:30:38.426063 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-flpt4_59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad/controller/0.log" Feb 24 16:30:38 crc kubenswrapper[4982]: I0224 16:30:38.674157 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-frr-files/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.119910 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-frr-files/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.155564 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-metrics/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.181307 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-reloader/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.183821 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-reloader/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.388392 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-frr-files/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.404315 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-metrics/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.436952 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-reloader/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.447899 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-metrics/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.632633 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-reloader/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.679264 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-metrics/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.688062 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-frr-files/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.721486 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/controller/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.911262 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/frr-metrics/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.954618 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/kube-rbac-proxy-frr/0.log" Feb 24 16:30:39 crc kubenswrapper[4982]: I0224 16:30:39.965525 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/kube-rbac-proxy/0.log" Feb 24 16:30:40 crc kubenswrapper[4982]: I0224 16:30:40.134679 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/reloader/0.log" Feb 24 16:30:40 crc kubenswrapper[4982]: I0224 16:30:40.231649 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-tszx8_dd5e975a-6b06-4ba5-a549-63843e3d9f41/frr-k8s-webhook-server/0.log" Feb 24 16:30:40 crc kubenswrapper[4982]: I0224 16:30:40.474774 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-686d7d6557-xttc8_bce5dbab-56c7-4132-aa32-b13ea1d81ada/manager/0.log" Feb 24 16:30:40 crc kubenswrapper[4982]: I0224 16:30:40.673525 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7474599d7f-769v9_eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb/webhook-server/0.log" Feb 24 16:30:40 crc kubenswrapper[4982]: I0224 16:30:40.842188 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-z6xx7_c5421b10-e070-4cdd-a7b1-060d75642b50/kube-rbac-proxy/0.log" Feb 24 16:30:41 crc kubenswrapper[4982]: I0224 16:30:41.472265 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-z6xx7_c5421b10-e070-4cdd-a7b1-060d75642b50/speaker/0.log" Feb 24 16:30:42 crc kubenswrapper[4982]: I0224 16:30:42.073554 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/frr/0.log" Feb 24 16:30:44 crc kubenswrapper[4982]: I0224 16:30:44.147017 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:30:44 crc kubenswrapper[4982]: E0224 16:30:44.147920 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:30:55 crc kubenswrapper[4982]: I0224 16:30:55.146494 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:30:55 crc kubenswrapper[4982]: E0224 16:30:55.147879 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:30:56 crc kubenswrapper[4982]: I0224 16:30:56.090238 4982 scope.go:117] "RemoveContainer" containerID="512a58df20830cc22cc2362c39e5bb833a9db65c151383ec9bef3340fac39a4d" Feb 24 16:30:56 crc kubenswrapper[4982]: I0224 16:30:56.191081 4982 scope.go:117] "RemoveContainer" containerID="92f7d16bb4a1ae6744131a7e4f32640fc3bc1fabe35211ac0e027d5742f82b5d" Feb 24 16:30:56 crc kubenswrapper[4982]: I0224 16:30:56.624622 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/util/0.log" Feb 24 16:30:56 crc kubenswrapper[4982]: I0224 16:30:56.759377 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/util/0.log" Feb 24 16:30:56 crc kubenswrapper[4982]: I0224 16:30:56.816655 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/pull/0.log" Feb 24 16:30:56 crc kubenswrapper[4982]: I0224 16:30:56.825936 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/pull/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.017565 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/util/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.023569 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/pull/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.023814 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/extract/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.205813 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/util/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.361275 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/util/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.412394 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/pull/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.429118 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/pull/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.578934 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/pull/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.582970 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/util/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.658459 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/extract/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.779728 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/util/0.log" Feb 24 16:30:57 crc kubenswrapper[4982]: I0224 16:30:57.986994 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/util/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.020812 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/pull/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.022662 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/pull/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.153012 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/util/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.190952 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/pull/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.232290 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/extract/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.330529 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-utilities/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.495111 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-content/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.651120 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-content/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.765337 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-utilities/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.800379 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-utilities/0.log" Feb 24 16:30:58 crc kubenswrapper[4982]: I0224 16:30:58.828735 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-content/0.log" Feb 24 16:30:59 crc kubenswrapper[4982]: I0224 16:30:59.065978 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-utilities/0.log" Feb 24 16:30:59 crc kubenswrapper[4982]: I0224 16:30:59.226344 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-utilities/0.log" Feb 24 16:30:59 crc kubenswrapper[4982]: I0224 16:30:59.794061 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/registry-server/0.log" Feb 24 16:30:59 crc kubenswrapper[4982]: I0224 16:30:59.919627 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-content/0.log" Feb 24 16:30:59 crc kubenswrapper[4982]: I0224 16:30:59.930757 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-content/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: I0224 16:31:00.135170 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-utilities/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: 
I0224 16:31:00.183512 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-content/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: I0224 16:31:00.369902 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/util/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: I0224 16:31:00.596033 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/pull/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: I0224 16:31:00.635428 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/util/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: I0224 16:31:00.654593 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/pull/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: I0224 16:31:00.919889 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/pull/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: I0224 16:31:00.922104 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/extract/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: I0224 16:31:00.923358 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/util/0.log" Feb 24 16:31:00 crc kubenswrapper[4982]: I0224 16:31:00.948910 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/registry-server/0.log" Feb 24 16:31:01 crc kubenswrapper[4982]: I0224 16:31:01.078870 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/util/0.log" Feb 24 16:31:01 crc kubenswrapper[4982]: I0224 16:31:01.963144 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/pull/0.log" Feb 24 16:31:01 crc kubenswrapper[4982]: I0224 16:31:01.972191 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/util/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.003320 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/pull/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.193783 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/util/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.210698 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/extract/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.211442 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/pull/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.217183 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-w8m2v_cfc42f28-cff7-46a9-a4cb-1421f8e7e61e/marketplace-operator/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.390181 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-utilities/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.556913 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-content/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.581710 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-utilities/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.593369 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-content/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.822100 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-content/0.log" Feb 24 16:31:02 crc kubenswrapper[4982]: I0224 16:31:02.837691 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-utilities/0.log" Feb 24 16:31:03 crc kubenswrapper[4982]: I0224 16:31:03.059484 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-utilities/0.log" Feb 24 16:31:03 crc kubenswrapper[4982]: I0224 16:31:03.230915 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-content/0.log" Feb 24 16:31:03 crc kubenswrapper[4982]: I0224 16:31:03.260070 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/registry-server/0.log" Feb 24 16:31:03 crc kubenswrapper[4982]: I0224 16:31:03.260639 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-utilities/0.log" Feb 24 16:31:03 crc kubenswrapper[4982]: I0224 16:31:03.278544 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-content/0.log" Feb 24 16:31:03 crc kubenswrapper[4982]: 
I0224 16:31:03.460891 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-utilities/0.log" Feb 24 16:31:03 crc kubenswrapper[4982]: I0224 16:31:03.479940 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-content/0.log" Feb 24 16:31:03 crc kubenswrapper[4982]: I0224 16:31:03.786765 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/registry-server/0.log" Feb 24 16:31:06 crc kubenswrapper[4982]: I0224 16:31:06.146627 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:31:06 crc kubenswrapper[4982]: E0224 16:31:06.147346 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:31:18 crc kubenswrapper[4982]: I0224 16:31:18.146109 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:31:18 crc kubenswrapper[4982]: E0224 16:31:18.148070 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:31:18 crc kubenswrapper[4982]: I0224 16:31:18.156737 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb_14ead058-d4ed-4e55-9632-a5e2f571b469/prometheus-operator-admission-webhook/0.log" Feb 24 16:31:18 crc kubenswrapper[4982]: I0224 16:31:18.161688 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-wjv8g_5d19a6f9-587e-42fc-8dd5-1a363bac4c09/prometheus-operator/0.log" Feb 24 16:31:18 crc kubenswrapper[4982]: I0224 16:31:18.170202 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc_ca2d2e72-fd73-4ad0-8d81-718235c7f891/prometheus-operator-admission-webhook/0.log" Feb 24 16:31:18 crc kubenswrapper[4982]: I0224 16:31:18.561581 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m6fxj_10e410a1-e886-451e-9cfc-40f6812a4d0d/observability-ui-dashboards/0.log" Feb 24 16:31:18 crc kubenswrapper[4982]: I0224 16:31:18.561901 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4hqxq_ef281ee3-4742-4dd3-947f-32c7f039f5ec/perses-operator/0.log" Feb 24 16:31:18 crc kubenswrapper[4982]: I0224 16:31:18.609207 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9bmsh_8186e569-67ca-4273-9de2-130ffd7dcf09/operator/0.log" Feb 24 16:31:33 crc kubenswrapper[4982]: I0224 16:31:33.146415 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:31:33 crc kubenswrapper[4982]: E0224 16:31:33.147384 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:31:34 crc kubenswrapper[4982]: I0224 16:31:34.475792 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4467cd99-kv4ps_9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c/kube-rbac-proxy/0.log" Feb 24 16:31:34 crc kubenswrapper[4982]: I0224 16:31:34.535883 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4467cd99-kv4ps_9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c/manager/0.log" Feb 24 16:31:45 crc kubenswrapper[4982]: E0224 16:31:45.639521 4982 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.50:48712->38.102.83.50:38677: write tcp 38.102.83.50:48712->38.102.83.50:38677: write: connection reset by peer Feb 24 16:31:47 crc kubenswrapper[4982]: I0224 16:31:47.145628 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:31:47 crc kubenswrapper[4982]: E0224 16:31:47.146343 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.207442 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532512-grt82"] Feb 24 16:32:00 crc kubenswrapper[4982]: E0224 16:32:00.208513 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerName="extract-utilities" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.208526 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerName="extract-utilities" Feb 24 16:32:00 crc kubenswrapper[4982]: E0224 16:32:00.208542 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerName="extract-content" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.208549 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerName="extract-content" Feb 24 16:32:00 crc kubenswrapper[4982]: E0224 16:32:00.208562 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerName="registry-server" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.208568 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerName="registry-server" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.208795 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e92941-5470-4fab-a165-7c55a0fa1a83" containerName="registry-server" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.209631 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532512-grt82" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.212401 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.214692 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.214700 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.219592 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532512-grt82"] Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.355931 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfxjx\" (UniqueName: \"kubernetes.io/projected/c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1-kube-api-access-lfxjx\") pod \"auto-csr-approver-29532512-grt82\" (UID: \"c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1\") " pod="openshift-infra/auto-csr-approver-29532512-grt82" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.458842 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfxjx\" (UniqueName: \"kubernetes.io/projected/c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1-kube-api-access-lfxjx\") pod \"auto-csr-approver-29532512-grt82\" (UID: \"c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1\") " pod="openshift-infra/auto-csr-approver-29532512-grt82" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.496847 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfxjx\" (UniqueName: \"kubernetes.io/projected/c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1-kube-api-access-lfxjx\") pod \"auto-csr-approver-29532512-grt82\" (UID: \"c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1\") " pod="openshift-infra/auto-csr-approver-29532512-grt82" Feb 24 16:32:00 crc kubenswrapper[4982]: I0224 16:32:00.535184 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532512-grt82" Feb 24 16:32:01 crc kubenswrapper[4982]: I0224 16:32:01.603953 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532512-grt82"] Feb 24 16:32:01 crc kubenswrapper[4982]: I0224 16:32:01.732222 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532512-grt82" event={"ID":"c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1","Type":"ContainerStarted","Data":"be57bc02cb460d4fd032ee614f0e03554349ab1b66026ac48357eed7c5e453bd"} Feb 24 16:32:02 crc kubenswrapper[4982]: I0224 16:32:02.145751 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:32:02 crc kubenswrapper[4982]: E0224 16:32:02.146315 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:32:03 crc kubenswrapper[4982]: I0224 16:32:03.756392 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532512-grt82" event={"ID":"c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1","Type":"ContainerStarted","Data":"ac94e7cbac225dff8692c0ebc5b6e483e143bdb73e39331ee61cea50712da2c5"} Feb 24 16:32:03 crc kubenswrapper[4982]: I0224 16:32:03.785162 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532512-grt82" podStartSLOduration=2.928808277 podStartE2EDuration="3.785139272s" podCreationTimestamp="2026-02-24 16:32:00 +0000 UTC" firstStartedPulling="2026-02-24 16:32:01.625304212 +0000 UTC m=+6183.244362705" lastFinishedPulling="2026-02-24 16:32:02.481635197 +0000 UTC m=+6184.100693700" observedRunningTime="2026-02-24 16:32:03.773794802 +0000 UTC m=+6185.392853295" watchObservedRunningTime="2026-02-24 16:32:03.785139272 +0000 UTC m=+6185.404197755" Feb 24 16:32:04 crc kubenswrapper[4982]: I0224 16:32:04.770715 4982 generic.go:334] "Generic (PLEG): container finished" podID="c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1" containerID="ac94e7cbac225dff8692c0ebc5b6e483e143bdb73e39331ee61cea50712da2c5" exitCode=0 Feb 24 16:32:04 crc kubenswrapper[4982]: I0224 16:32:04.770778 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532512-grt82" event={"ID":"c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1","Type":"ContainerDied","Data":"ac94e7cbac225dff8692c0ebc5b6e483e143bdb73e39331ee61cea50712da2c5"} Feb 24 16:32:06 crc kubenswrapper[4982]: I0224 16:32:06.251812 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532512-grt82" Feb 24 16:32:06 crc kubenswrapper[4982]: I0224 16:32:06.406607 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfxjx\" (UniqueName: \"kubernetes.io/projected/c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1-kube-api-access-lfxjx\") pod \"c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1\" (UID: \"c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1\") " Feb 24 16:32:06 crc kubenswrapper[4982]: I0224 16:32:06.413573 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1-kube-api-access-lfxjx" (OuterVolumeSpecName: "kube-api-access-lfxjx") pod "c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1" (UID: "c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1"). InnerVolumeSpecName "kube-api-access-lfxjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:32:06 crc kubenswrapper[4982]: I0224 16:32:06.508932 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfxjx\" (UniqueName: \"kubernetes.io/projected/c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1-kube-api-access-lfxjx\") on node \"crc\" DevicePath \"\"" Feb 24 16:32:06 crc kubenswrapper[4982]: I0224 16:32:06.794843 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532512-grt82" event={"ID":"c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1","Type":"ContainerDied","Data":"be57bc02cb460d4fd032ee614f0e03554349ab1b66026ac48357eed7c5e453bd"} Feb 24 16:32:06 crc kubenswrapper[4982]: I0224 16:32:06.794886 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be57bc02cb460d4fd032ee614f0e03554349ab1b66026ac48357eed7c5e453bd" Feb 24 16:32:06 crc kubenswrapper[4982]: I0224 16:32:06.794945 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532512-grt82" Feb 24 16:32:06 crc kubenswrapper[4982]: I0224 16:32:06.853408 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532506-xzf46"] Feb 24 16:32:06 crc kubenswrapper[4982]: I0224 16:32:06.864771 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532506-xzf46"] Feb 24 16:32:07 crc kubenswrapper[4982]: I0224 16:32:07.162648 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82beadda-27e6-497a-8271-54e32586942c" path="/var/lib/kubelet/pods/82beadda-27e6-497a-8271-54e32586942c/volumes" Feb 24 16:32:15 crc kubenswrapper[4982]: I0224 16:32:15.147943 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:32:15 crc kubenswrapper[4982]: E0224 16:32:15.149229 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:32:27 crc kubenswrapper[4982]: I0224 16:32:27.148369 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:32:27 crc kubenswrapper[4982]: E0224 16:32:27.149393 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:32:42 crc kubenswrapper[4982]: I0224 16:32:42.145860 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:32:42 crc kubenswrapper[4982]: E0224 16:32:42.147042 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:32:56 crc kubenswrapper[4982]: I0224 16:32:56.494541 4982 scope.go:117] "RemoveContainer" containerID="0c4eec9c2d0e77d103c55002b5f4a9947a1896d4a975e45a00837fc4e50aaf66" Feb 24 16:32:56 crc kubenswrapper[4982]: I0224 16:32:56.526208 4982 scope.go:117] "RemoveContainer" containerID="448942b858e9325697d1d3c7a22ef51a96a486af67db959b9e95a723446a0d3a" Feb 24 16:32:57 crc kubenswrapper[4982]: I0224 16:32:57.146826 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:32:57 crc kubenswrapper[4982]: E0224 16:32:57.147407 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:33:11 crc kubenswrapper[4982]: I0224 16:33:11.146993 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:33:11 crc kubenswrapper[4982]: E0224 16:33:11.148268 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.564661 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6lhmb"]
Feb 24 16:33:15 crc kubenswrapper[4982]: E0224 16:33:15.565707 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1" containerName="oc"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.565721 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1" containerName="oc"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.565972 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1" containerName="oc"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.567781 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.586693 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6lhmb"]
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.682448 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxjt\" (UniqueName: \"kubernetes.io/projected/0f5e900d-ea8f-41b8-8d00-cdd421911437-kube-api-access-7cxjt\") pod \"certified-operators-6lhmb\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") " pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.682498 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-catalog-content\") pod \"certified-operators-6lhmb\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") " pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.682828 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-utilities\") pod \"certified-operators-6lhmb\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") " pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.784574 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-utilities\") pod \"certified-operators-6lhmb\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") " pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.784741 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cxjt\" (UniqueName: \"kubernetes.io/projected/0f5e900d-ea8f-41b8-8d00-cdd421911437-kube-api-access-7cxjt\") pod \"certified-operators-6lhmb\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") " pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.784769 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-catalog-content\") pod \"certified-operators-6lhmb\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") " pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.785128 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-utilities\") pod \"certified-operators-6lhmb\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") " pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.785213 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-catalog-content\") pod \"certified-operators-6lhmb\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") " pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.807002 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cxjt\" (UniqueName: \"kubernetes.io/projected/0f5e900d-ea8f-41b8-8d00-cdd421911437-kube-api-access-7cxjt\") pod \"certified-operators-6lhmb\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") " pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:15 crc kubenswrapper[4982]: I0224 16:33:15.900332 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:17 crc kubenswrapper[4982]: I0224 16:33:17.367638 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6lhmb"]
Feb 24 16:33:17 crc kubenswrapper[4982]: I0224 16:33:17.709570 4982 generic.go:334] "Generic (PLEG): container finished" podID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerID="f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222" exitCode=0
Feb 24 16:33:17 crc kubenswrapper[4982]: I0224 16:33:17.709746 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lhmb" event={"ID":"0f5e900d-ea8f-41b8-8d00-cdd421911437","Type":"ContainerDied","Data":"f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222"}
Feb 24 16:33:17 crc kubenswrapper[4982]: I0224 16:33:17.710000 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lhmb" event={"ID":"0f5e900d-ea8f-41b8-8d00-cdd421911437","Type":"ContainerStarted","Data":"262e69baf65cca49bcd76ffa40a65120668763e3203ee9963b9e3ec45d29f41b"}
Feb 24 16:33:18 crc kubenswrapper[4982]: I0224 16:33:18.723289 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lhmb" event={"ID":"0f5e900d-ea8f-41b8-8d00-cdd421911437","Type":"ContainerStarted","Data":"18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf"}
Feb 24 16:33:20 crc kubenswrapper[4982]: I0224 16:33:20.749188 4982 generic.go:334] "Generic (PLEG): container finished" podID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerID="18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf" exitCode=0
Feb 24 16:33:20 crc kubenswrapper[4982]: I0224 16:33:20.749274 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lhmb" event={"ID":"0f5e900d-ea8f-41b8-8d00-cdd421911437","Type":"ContainerDied","Data":"18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf"}
Feb 24 16:33:21 crc kubenswrapper[4982]: I0224 16:33:21.762164 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lhmb" event={"ID":"0f5e900d-ea8f-41b8-8d00-cdd421911437","Type":"ContainerStarted","Data":"fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143"}
Feb 24 16:33:21 crc kubenswrapper[4982]: I0224 16:33:21.798050 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6lhmb" podStartSLOduration=3.334302442 podStartE2EDuration="6.798028586s" podCreationTimestamp="2026-02-24 16:33:15 +0000 UTC" firstStartedPulling="2026-02-24 16:33:17.71169134 +0000 UTC m=+6259.330749843" lastFinishedPulling="2026-02-24 16:33:21.175417484 +0000 UTC m=+6262.794475987" observedRunningTime="2026-02-24 16:33:21.790086601 +0000 UTC m=+6263.409145094" watchObservedRunningTime="2026-02-24 16:33:21.798028586 +0000 UTC m=+6263.417087069"
Feb 24 16:33:24 crc kubenswrapper[4982]: I0224 16:33:24.146046 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:33:24 crc kubenswrapper[4982]: E0224 16:33:24.146776 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:33:25 crc kubenswrapper[4982]: I0224 16:33:25.900720 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:25 crc kubenswrapper[4982]: I0224 16:33:25.901124 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:26 crc kubenswrapper[4982]: I0224 16:33:26.972593 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6lhmb" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerName="registry-server" probeResult="failure" output=<
Feb 24 16:33:26 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s
Feb 24 16:33:26 crc kubenswrapper[4982]: >
Feb 24 16:33:35 crc kubenswrapper[4982]: I0224 16:33:35.146170 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:33:35 crc kubenswrapper[4982]: E0224 16:33:35.147037 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:33:35 crc kubenswrapper[4982]: I0224 16:33:35.988864 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:36 crc kubenswrapper[4982]: I0224 16:33:36.087000 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:36 crc kubenswrapper[4982]: I0224 16:33:36.846705 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6lhmb"]
Feb 24 16:33:37 crc kubenswrapper[4982]: I0224 16:33:37.979162 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6lhmb" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerName="registry-server" containerID="cri-o://fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143" gracePeriod=2
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.627020 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.685963 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-catalog-content\") pod \"0f5e900d-ea8f-41b8-8d00-cdd421911437\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") "
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.686337 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-utilities\") pod \"0f5e900d-ea8f-41b8-8d00-cdd421911437\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") "
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.686387 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cxjt\" (UniqueName: \"kubernetes.io/projected/0f5e900d-ea8f-41b8-8d00-cdd421911437-kube-api-access-7cxjt\") pod \"0f5e900d-ea8f-41b8-8d00-cdd421911437\" (UID: \"0f5e900d-ea8f-41b8-8d00-cdd421911437\") "
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.687419 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-utilities" (OuterVolumeSpecName: "utilities") pod "0f5e900d-ea8f-41b8-8d00-cdd421911437" (UID: "0f5e900d-ea8f-41b8-8d00-cdd421911437"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.696763 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5e900d-ea8f-41b8-8d00-cdd421911437-kube-api-access-7cxjt" (OuterVolumeSpecName: "kube-api-access-7cxjt") pod "0f5e900d-ea8f-41b8-8d00-cdd421911437" (UID: "0f5e900d-ea8f-41b8-8d00-cdd421911437"). InnerVolumeSpecName "kube-api-access-7cxjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.751701 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f5e900d-ea8f-41b8-8d00-cdd421911437" (UID: "0f5e900d-ea8f-41b8-8d00-cdd421911437"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.790313 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.790357 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f5e900d-ea8f-41b8-8d00-cdd421911437-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.790366 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cxjt\" (UniqueName: \"kubernetes.io/projected/0f5e900d-ea8f-41b8-8d00-cdd421911437-kube-api-access-7cxjt\") on node \"crc\" DevicePath \"\""
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.996072 4982 generic.go:334] "Generic (PLEG): container finished" podID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerID="fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143" exitCode=0
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.996130 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lhmb" event={"ID":"0f5e900d-ea8f-41b8-8d00-cdd421911437","Type":"ContainerDied","Data":"fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143"}
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.996168 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lhmb" event={"ID":"0f5e900d-ea8f-41b8-8d00-cdd421911437","Type":"ContainerDied","Data":"262e69baf65cca49bcd76ffa40a65120668763e3203ee9963b9e3ec45d29f41b"}
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.996196 4982 scope.go:117] "RemoveContainer" containerID="fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143"
Feb 24 16:33:38 crc kubenswrapper[4982]: I0224 16:33:38.996209 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6lhmb"
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.050163 4982 scope.go:117] "RemoveContainer" containerID="18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf"
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.057305 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6lhmb"]
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.066470 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6lhmb"]
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.075484 4982 scope.go:117] "RemoveContainer" containerID="f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222"
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.138696 4982 scope.go:117] "RemoveContainer" containerID="fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143"
Feb 24 16:33:39 crc kubenswrapper[4982]: E0224 16:33:39.139057 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143\": container with ID starting with fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143 not found: ID does not exist" containerID="fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143"
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.139092 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143"} err="failed to get container status \"fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143\": rpc error: code = NotFound desc = could not find container \"fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143\": container with ID starting with fb5d62c8e03ca2034677774bc48a57fffda89ef31bb57a389111537702b5f143 not found: ID does not exist"
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.139114 4982 scope.go:117] "RemoveContainer" containerID="18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf"
Feb 24 16:33:39 crc kubenswrapper[4982]: E0224 16:33:39.139387 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf\": container with ID starting with 18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf not found: ID does not exist" containerID="18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf"
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.139452 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf"} err="failed to get container status \"18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf\": rpc error: code = NotFound desc = could not find container \"18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf\": container with ID starting with 18dfccc5a3e8d78bb70aeaf091763b0a894edd0db7ce80d136469078aa10debf not found: ID does not exist"
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.139517 4982 scope.go:117] "RemoveContainer" containerID="f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222"
Feb 24 16:33:39 crc kubenswrapper[4982]: E0224 16:33:39.140021 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222\": container with ID starting with f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222 not found: ID does not exist" containerID="f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222"
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.140059 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222"} err="failed to get container status \"f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222\": rpc error: code = NotFound desc = could not find container \"f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222\": container with ID starting with f98c1779e5ee96aebc6b2e826344d938a967c5e3020a6fcd1436668ba6860222 not found: ID does not exist"
Feb 24 16:33:39 crc kubenswrapper[4982]: I0224 16:33:39.160973 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" path="/var/lib/kubelet/pods/0f5e900d-ea8f-41b8-8d00-cdd421911437/volumes"
Feb 24 16:33:41 crc kubenswrapper[4982]: I0224 16:33:41.043319 4982 generic.go:334] "Generic (PLEG): container finished" podID="84079553-029b-4043-a50c-b7ddd543dadb" containerID="23920e794895d9d49785bee59bb4875d0f0ef05c9e5d420d706a5e403b94d9e2" exitCode=0
Feb 24 16:33:41 crc kubenswrapper[4982]: I0224 16:33:41.043396 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nsv2s/must-gather-7n947" event={"ID":"84079553-029b-4043-a50c-b7ddd543dadb","Type":"ContainerDied","Data":"23920e794895d9d49785bee59bb4875d0f0ef05c9e5d420d706a5e403b94d9e2"}
Feb 24 16:33:41 crc kubenswrapper[4982]: I0224 16:33:41.044619 4982 scope.go:117] "RemoveContainer" containerID="23920e794895d9d49785bee59bb4875d0f0ef05c9e5d420d706a5e403b94d9e2"
Feb 24 16:33:41 crc kubenswrapper[4982]: I0224 16:33:41.472452 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nsv2s_must-gather-7n947_84079553-029b-4043-a50c-b7ddd543dadb/gather/0.log"
Feb 24 16:33:46 crc kubenswrapper[4982]: I0224 16:33:46.146262 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:33:46 crc kubenswrapper[4982]: E0224 16:33:46.147597 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:33:49 crc kubenswrapper[4982]: I0224 16:33:49.941868 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nsv2s/must-gather-7n947"]
Feb 24 16:33:49 crc kubenswrapper[4982]: I0224 16:33:49.942686 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-nsv2s/must-gather-7n947" podUID="84079553-029b-4043-a50c-b7ddd543dadb" containerName="copy" containerID="cri-o://1fe256fa2716cf1b19b676369e86a29c69e9e687402238ce67ca03e0a3e4040e" gracePeriod=2
Feb 24 16:33:49 crc kubenswrapper[4982]: I0224 16:33:49.959797 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nsv2s/must-gather-7n947"]
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.180013 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nsv2s_must-gather-7n947_84079553-029b-4043-a50c-b7ddd543dadb/copy/0.log"
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.181087 4982 generic.go:334] "Generic (PLEG): container finished" podID="84079553-029b-4043-a50c-b7ddd543dadb" containerID="1fe256fa2716cf1b19b676369e86a29c69e9e687402238ce67ca03e0a3e4040e" exitCode=143
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.499770 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nsv2s_must-gather-7n947_84079553-029b-4043-a50c-b7ddd543dadb/copy/0.log"
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.500792 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.640130 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z97pz\" (UniqueName: \"kubernetes.io/projected/84079553-029b-4043-a50c-b7ddd543dadb-kube-api-access-z97pz\") pod \"84079553-029b-4043-a50c-b7ddd543dadb\" (UID: \"84079553-029b-4043-a50c-b7ddd543dadb\") "
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.640240 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84079553-029b-4043-a50c-b7ddd543dadb-must-gather-output\") pod \"84079553-029b-4043-a50c-b7ddd543dadb\" (UID: \"84079553-029b-4043-a50c-b7ddd543dadb\") "
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.645945 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84079553-029b-4043-a50c-b7ddd543dadb-kube-api-access-z97pz" (OuterVolumeSpecName: "kube-api-access-z97pz") pod "84079553-029b-4043-a50c-b7ddd543dadb" (UID: "84079553-029b-4043-a50c-b7ddd543dadb"). InnerVolumeSpecName "kube-api-access-z97pz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.743678 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z97pz\" (UniqueName: \"kubernetes.io/projected/84079553-029b-4043-a50c-b7ddd543dadb-kube-api-access-z97pz\") on node \"crc\" DevicePath \"\""
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.863896 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84079553-029b-4043-a50c-b7ddd543dadb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "84079553-029b-4043-a50c-b7ddd543dadb" (UID: "84079553-029b-4043-a50c-b7ddd543dadb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:33:50 crc kubenswrapper[4982]: I0224 16:33:50.948849 4982 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/84079553-029b-4043-a50c-b7ddd543dadb-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 24 16:33:51 crc kubenswrapper[4982]: I0224 16:33:51.166547 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84079553-029b-4043-a50c-b7ddd543dadb" path="/var/lib/kubelet/pods/84079553-029b-4043-a50c-b7ddd543dadb/volumes"
Feb 24 16:33:51 crc kubenswrapper[4982]: I0224 16:33:51.194028 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nsv2s_must-gather-7n947_84079553-029b-4043-a50c-b7ddd543dadb/copy/0.log"
Feb 24 16:33:51 crc kubenswrapper[4982]: I0224 16:33:51.194565 4982 scope.go:117] "RemoveContainer" containerID="1fe256fa2716cf1b19b676369e86a29c69e9e687402238ce67ca03e0a3e4040e"
Feb 24 16:33:51 crc kubenswrapper[4982]: I0224 16:33:51.194707 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nsv2s/must-gather-7n947"
Feb 24 16:33:51 crc kubenswrapper[4982]: I0224 16:33:51.226094 4982 scope.go:117] "RemoveContainer" containerID="23920e794895d9d49785bee59bb4875d0f0ef05c9e5d420d706a5e403b94d9e2"
Feb 24 16:33:56 crc kubenswrapper[4982]: I0224 16:33:56.749260 4982 scope.go:117] "RemoveContainer" containerID="c619d190f1b3121f09aa0fad0f669015c23b44331a50435fa545378ea9b93e3d"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.145668 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:34:00 crc kubenswrapper[4982]: E0224 16:34:00.146356 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.152306 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532514-mpjxp"]
Feb 24 16:34:00 crc kubenswrapper[4982]: E0224 16:34:00.152789 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84079553-029b-4043-a50c-b7ddd543dadb" containerName="copy"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.152805 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="84079553-029b-4043-a50c-b7ddd543dadb" containerName="copy"
Feb 24 16:34:00 crc kubenswrapper[4982]: E0224 16:34:00.152829 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerName="registry-server"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.152836 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerName="registry-server"
Feb 24 16:34:00 crc kubenswrapper[4982]: E0224 16:34:00.152855 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerName="extract-content"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.152860 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerName="extract-content"
Feb 24 16:34:00 crc kubenswrapper[4982]: E0224 16:34:00.152889 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84079553-029b-4043-a50c-b7ddd543dadb" containerName="gather"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.152895 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="84079553-029b-4043-a50c-b7ddd543dadb" containerName="gather"
Feb 24 16:34:00 crc kubenswrapper[4982]: E0224 16:34:00.152909 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerName="extract-utilities"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.152915 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerName="extract-utilities"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.153097 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5e900d-ea8f-41b8-8d00-cdd421911437" containerName="registry-server"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.153119 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="84079553-029b-4043-a50c-b7ddd543dadb" containerName="copy"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.153137 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="84079553-029b-4043-a50c-b7ddd543dadb" containerName="gather"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.153877 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532514-mpjxp"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.156637 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.157196 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.157953 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.182798 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532514-mpjxp"]
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.199437 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgch6\" (UniqueName: \"kubernetes.io/projected/9889e25d-5027-4f9b-b4b9-eeffa0fac2af-kube-api-access-wgch6\") pod \"auto-csr-approver-29532514-mpjxp\" (UID: \"9889e25d-5027-4f9b-b4b9-eeffa0fac2af\") " pod="openshift-infra/auto-csr-approver-29532514-mpjxp"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.306010 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgch6\" (UniqueName: \"kubernetes.io/projected/9889e25d-5027-4f9b-b4b9-eeffa0fac2af-kube-api-access-wgch6\") pod \"auto-csr-approver-29532514-mpjxp\" (UID: \"9889e25d-5027-4f9b-b4b9-eeffa0fac2af\") " pod="openshift-infra/auto-csr-approver-29532514-mpjxp"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.323438 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgch6\" (UniqueName: \"kubernetes.io/projected/9889e25d-5027-4f9b-b4b9-eeffa0fac2af-kube-api-access-wgch6\") pod \"auto-csr-approver-29532514-mpjxp\" (UID: \"9889e25d-5027-4f9b-b4b9-eeffa0fac2af\") " pod="openshift-infra/auto-csr-approver-29532514-mpjxp"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.478984 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532514-mpjxp"
Feb 24 16:34:00 crc kubenswrapper[4982]: I0224 16:34:00.975589 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532514-mpjxp"]
Feb 24 16:34:01 crc kubenswrapper[4982]: I0224 16:34:01.324080 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532514-mpjxp" event={"ID":"9889e25d-5027-4f9b-b4b9-eeffa0fac2af","Type":"ContainerStarted","Data":"1f8d4af212e5c292ac93f52b1f06c94aec149553bbe1f0b427ee761ebcda79c1"}
Feb 24 16:34:03 crc kubenswrapper[4982]: I0224 16:34:03.351010 4982 generic.go:334] "Generic (PLEG): container finished" podID="9889e25d-5027-4f9b-b4b9-eeffa0fac2af" containerID="f67e44eb86d825d92abf1432022e2c604100684c6442694c43750b4574df14c8" exitCode=0
Feb 24 16:34:03 crc kubenswrapper[4982]: I0224 16:34:03.351665 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532514-mpjxp" event={"ID":"9889e25d-5027-4f9b-b4b9-eeffa0fac2af","Type":"ContainerDied","Data":"f67e44eb86d825d92abf1432022e2c604100684c6442694c43750b4574df14c8"}
Feb 24 16:34:04 crc kubenswrapper[4982]: I0224 16:34:04.785083 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532514-mpjxp"
Feb 24 16:34:04 crc kubenswrapper[4982]: I0224 16:34:04.930403 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgch6\" (UniqueName: \"kubernetes.io/projected/9889e25d-5027-4f9b-b4b9-eeffa0fac2af-kube-api-access-wgch6\") pod \"9889e25d-5027-4f9b-b4b9-eeffa0fac2af\" (UID: \"9889e25d-5027-4f9b-b4b9-eeffa0fac2af\") "
Feb 24 16:34:04 crc kubenswrapper[4982]: I0224 16:34:04.936462 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9889e25d-5027-4f9b-b4b9-eeffa0fac2af-kube-api-access-wgch6" (OuterVolumeSpecName: "kube-api-access-wgch6") pod "9889e25d-5027-4f9b-b4b9-eeffa0fac2af" (UID: "9889e25d-5027-4f9b-b4b9-eeffa0fac2af"). InnerVolumeSpecName "kube-api-access-wgch6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:34:05 crc kubenswrapper[4982]: I0224 16:34:05.034254 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgch6\" (UniqueName: \"kubernetes.io/projected/9889e25d-5027-4f9b-b4b9-eeffa0fac2af-kube-api-access-wgch6\") on node \"crc\" DevicePath \"\""
Feb 24 16:34:05 crc kubenswrapper[4982]: I0224 16:34:05.376740 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532514-mpjxp" event={"ID":"9889e25d-5027-4f9b-b4b9-eeffa0fac2af","Type":"ContainerDied","Data":"1f8d4af212e5c292ac93f52b1f06c94aec149553bbe1f0b427ee761ebcda79c1"}
Feb 24 16:34:05 crc kubenswrapper[4982]: I0224 16:34:05.376774 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f8d4af212e5c292ac93f52b1f06c94aec149553bbe1f0b427ee761ebcda79c1"
Feb 24 16:34:05 crc kubenswrapper[4982]: I0224 16:34:05.376791 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532514-mpjxp"
Feb 24 16:34:05 crc kubenswrapper[4982]: I0224 16:34:05.864165 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532508-2654s"]
Feb 24 16:34:05 crc kubenswrapper[4982]: I0224 16:34:05.877299 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532508-2654s"]
Feb 24 16:34:07 crc kubenswrapper[4982]: I0224 16:34:07.169461 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898" path="/var/lib/kubelet/pods/0e1dc5e3-a4e4-487a-b0e9-3fc7800aa898/volumes"
Feb 24 16:34:13 crc kubenswrapper[4982]: I0224 16:34:13.146607 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:34:13 crc kubenswrapper[4982]: E0224 16:34:13.147464 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:34:26 crc kubenswrapper[4982]: I0224 16:34:26.145957 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:34:26 crc kubenswrapper[4982]: E0224 16:34:26.148174 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:34:36 crc kubenswrapper[4982]: I0224 16:34:36.103274 4982 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5kccw container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded" start-of-body=
Feb 24 16:34:36 crc kubenswrapper[4982]: I0224 16:34:36.103840 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5kccw" podUID="7d1e5581-e8dc-40ee-8af4-d8f5f2a70fa6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded"
Feb 24 16:34:36 crc kubenswrapper[4982]: E0224 16:34:36.183950 4982 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.039s"
Feb 24 16:34:39 crc kubenswrapper[4982]: I0224 16:34:39.161423 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:34:39 crc kubenswrapper[4982]: E0224 16:34:39.162870 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:34:53 crc kubenswrapper[4982]: I0224 16:34:53.147225 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:34:53 crc kubenswrapper[4982]: E0224 16:34:53.148439 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:34:56 crc kubenswrapper[4982]: I0224 16:34:56.892151 4982 scope.go:117] "RemoveContainer" containerID="06df4022a010a9b0a008a60620a14250e5bd337d7c06d1652f1e410c2e7b4d3d"
Feb 24 16:35:05 crc kubenswrapper[4982]: I0224 16:35:05.146124 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:35:05 crc kubenswrapper[4982]: E0224 16:35:05.147011 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8"
Feb 24 16:35:16 crc kubenswrapper[4982]: I0224 16:35:16.146291 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe"
Feb 24 16:35:16 crc kubenswrapper[4982]: I0224 16:35:16.687941 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"8ace8224ab7a5a8617509839dcd2d4d3ee3d53ce07edc0028d24bce3f1f97c1b"}
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.176418 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532516-j999s"]
Feb 24 16:36:00 crc kubenswrapper[4982]: E0224 16:36:00.177778 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9889e25d-5027-4f9b-b4b9-eeffa0fac2af" containerName="oc"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.177801 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9889e25d-5027-4f9b-b4b9-eeffa0fac2af" containerName="oc"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.178290 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9889e25d-5027-4f9b-b4b9-eeffa0fac2af" containerName="oc"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.179756 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532516-j999s"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.182900 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.183232 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.183400 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.192583 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532516-j999s"]
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.275842 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdf6\" (UniqueName: \"kubernetes.io/projected/03fa432e-9a3c-4f3b-9314-71aba5302e06-kube-api-access-sxdf6\") pod \"auto-csr-approver-29532516-j999s\" (UID: \"03fa432e-9a3c-4f3b-9314-71aba5302e06\") " pod="openshift-infra/auto-csr-approver-29532516-j999s"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.379095 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdf6\" (UniqueName: \"kubernetes.io/projected/03fa432e-9a3c-4f3b-9314-71aba5302e06-kube-api-access-sxdf6\") pod \"auto-csr-approver-29532516-j999s\" (UID: \"03fa432e-9a3c-4f3b-9314-71aba5302e06\") " pod="openshift-infra/auto-csr-approver-29532516-j999s"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.404833 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdf6\" (UniqueName: \"kubernetes.io/projected/03fa432e-9a3c-4f3b-9314-71aba5302e06-kube-api-access-sxdf6\") pod \"auto-csr-approver-29532516-j999s\" (UID: \"03fa432e-9a3c-4f3b-9314-71aba5302e06\") " pod="openshift-infra/auto-csr-approver-29532516-j999s"
Feb 24 16:36:00 crc kubenswrapper[4982]: I0224 16:36:00.508672 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532516-j999s"
Feb 24 16:36:01 crc kubenswrapper[4982]: I0224 16:36:01.043882 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532516-j999s"]
Feb 24 16:36:01 crc kubenswrapper[4982]: I0224 16:36:01.057089 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 16:36:01 crc kubenswrapper[4982]: I0224 16:36:01.292161 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532516-j999s" event={"ID":"03fa432e-9a3c-4f3b-9314-71aba5302e06","Type":"ContainerStarted","Data":"3ff93a7e8941c6ddf915595a525c75860b31c579c4808bf1d73224f755bbd64e"}
Feb 24 16:36:03 crc kubenswrapper[4982]: I0224 16:36:03.321151 4982 generic.go:334] "Generic (PLEG): container finished" podID="03fa432e-9a3c-4f3b-9314-71aba5302e06" containerID="c9bceda3adc7db40727ff2026d2de8ea84b5ec97bcef632446379c31faccb358" exitCode=0
Feb 24 16:36:03 crc kubenswrapper[4982]: I0224 16:36:03.321836 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532516-j999s" event={"ID":"03fa432e-9a3c-4f3b-9314-71aba5302e06","Type":"ContainerDied","Data":"c9bceda3adc7db40727ff2026d2de8ea84b5ec97bcef632446379c31faccb358"}
Feb 24 16:36:04 crc kubenswrapper[4982]: I0224 16:36:04.809216 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532516-j999s"
Feb 24 16:36:04 crc kubenswrapper[4982]: I0224 16:36:04.933139 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxdf6\" (UniqueName: \"kubernetes.io/projected/03fa432e-9a3c-4f3b-9314-71aba5302e06-kube-api-access-sxdf6\") pod \"03fa432e-9a3c-4f3b-9314-71aba5302e06\" (UID: \"03fa432e-9a3c-4f3b-9314-71aba5302e06\") "
Feb 24 16:36:04 crc kubenswrapper[4982]: I0224 16:36:04.939177 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03fa432e-9a3c-4f3b-9314-71aba5302e06-kube-api-access-sxdf6" (OuterVolumeSpecName: "kube-api-access-sxdf6") pod "03fa432e-9a3c-4f3b-9314-71aba5302e06" (UID: "03fa432e-9a3c-4f3b-9314-71aba5302e06"). InnerVolumeSpecName "kube-api-access-sxdf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:36:05 crc kubenswrapper[4982]: I0224 16:36:05.037716 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxdf6\" (UniqueName: \"kubernetes.io/projected/03fa432e-9a3c-4f3b-9314-71aba5302e06-kube-api-access-sxdf6\") on node \"crc\" DevicePath \"\""
Feb 24 16:36:05 crc kubenswrapper[4982]: I0224 16:36:05.362943 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532516-j999s" event={"ID":"03fa432e-9a3c-4f3b-9314-71aba5302e06","Type":"ContainerDied","Data":"3ff93a7e8941c6ddf915595a525c75860b31c579c4808bf1d73224f755bbd64e"}
Feb 24 16:36:05 crc kubenswrapper[4982]: I0224 16:36:05.363008 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ff93a7e8941c6ddf915595a525c75860b31c579c4808bf1d73224f755bbd64e"
Feb 24 16:36:05 crc kubenswrapper[4982]: I0224 16:36:05.363098 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532516-j999s"
Feb 24 16:36:05 crc kubenswrapper[4982]: I0224 16:36:05.912855 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532510-wlx6h"]
Feb 24 16:36:05 crc kubenswrapper[4982]: I0224 16:36:05.937858 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532510-wlx6h"]
Feb 24 16:36:07 crc kubenswrapper[4982]: I0224 16:36:07.169164 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f297b1-0ddb-4703-abb9-329c0dd5863d" path="/var/lib/kubelet/pods/02f297b1-0ddb-4703-abb9-329c0dd5863d/volumes"
Feb 24 16:36:57 crc kubenswrapper[4982]: I0224 16:36:57.064638 4982 scope.go:117] "RemoveContainer" containerID="864940e75182aeeadcd3ce7c93790f7618796fbef5f794f3edf7b68d8b619d99"
Feb 24 16:37:09 crc kubenswrapper[4982]: I0224 16:37:09.895523 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bf2x/must-gather-4tsr9"]
Feb 24 16:37:09 crc kubenswrapper[4982]: E0224 16:37:09.897270 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03fa432e-9a3c-4f3b-9314-71aba5302e06" containerName="oc"
Feb 24 16:37:09 crc kubenswrapper[4982]: I0224 16:37:09.897345 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="03fa432e-9a3c-4f3b-9314-71aba5302e06" containerName="oc"
Feb 24 16:37:09 crc kubenswrapper[4982]: I0224 16:37:09.897645 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="03fa432e-9a3c-4f3b-9314-71aba5302e06" containerName="oc"
Feb 24 16:37:09 crc kubenswrapper[4982]: I0224 16:37:09.898899 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/must-gather-4tsr9"
Feb 24 16:37:09 crc kubenswrapper[4982]: I0224 16:37:09.909623 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6bf2x"/"openshift-service-ca.crt"
Feb 24 16:37:09 crc kubenswrapper[4982]: I0224 16:37:09.922305 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6bf2x"/"kube-root-ca.crt"
Feb 24 16:37:09 crc kubenswrapper[4982]: I0224 16:37:09.933485 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5zf\" (UniqueName: \"kubernetes.io/projected/26ecbcc1-7cd8-453b-96ba-6fea013ae275-kube-api-access-8f5zf\") pod \"must-gather-4tsr9\" (UID: \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\") " pod="openshift-must-gather-6bf2x/must-gather-4tsr9"
Feb 24 16:37:09 crc kubenswrapper[4982]: I0224 16:37:09.933668 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/26ecbcc1-7cd8-453b-96ba-6fea013ae275-must-gather-output\") pod \"must-gather-4tsr9\" (UID: \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\") " pod="openshift-must-gather-6bf2x/must-gather-4tsr9"
Feb 24 16:37:09 crc kubenswrapper[4982]: I0224 16:37:09.948721 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6bf2x/must-gather-4tsr9"]
Feb 24 16:37:10 crc kubenswrapper[4982]: I0224 16:37:10.036754 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f5zf\" (UniqueName: \"kubernetes.io/projected/26ecbcc1-7cd8-453b-96ba-6fea013ae275-kube-api-access-8f5zf\") pod \"must-gather-4tsr9\" (UID: \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\") " pod="openshift-must-gather-6bf2x/must-gather-4tsr9"
Feb 24 16:37:10 crc kubenswrapper[4982]: I0224 16:37:10.036837 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/26ecbcc1-7cd8-453b-96ba-6fea013ae275-must-gather-output\") pod \"must-gather-4tsr9\" (UID: \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\") " pod="openshift-must-gather-6bf2x/must-gather-4tsr9"
Feb 24 16:37:10 crc kubenswrapper[4982]: I0224 16:37:10.038100 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/26ecbcc1-7cd8-453b-96ba-6fea013ae275-must-gather-output\") pod \"must-gather-4tsr9\" (UID: \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\") " pod="openshift-must-gather-6bf2x/must-gather-4tsr9"
Feb 24 16:37:10 crc kubenswrapper[4982]: I0224 16:37:10.080640 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f5zf\" (UniqueName: \"kubernetes.io/projected/26ecbcc1-7cd8-453b-96ba-6fea013ae275-kube-api-access-8f5zf\") pod \"must-gather-4tsr9\" (UID: \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\") " pod="openshift-must-gather-6bf2x/must-gather-4tsr9"
Feb 24 16:37:10 crc kubenswrapper[4982]: I0224 16:37:10.218971 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/must-gather-4tsr9"
Feb 24 16:37:10 crc kubenswrapper[4982]: I0224 16:37:10.796891 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6bf2x/must-gather-4tsr9"]
Feb 24 16:37:11 crc kubenswrapper[4982]: I0224 16:37:11.277468 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/must-gather-4tsr9" event={"ID":"26ecbcc1-7cd8-453b-96ba-6fea013ae275","Type":"ContainerStarted","Data":"6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638"}
Feb 24 16:37:11 crc kubenswrapper[4982]: I0224 16:37:11.277787 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/must-gather-4tsr9" event={"ID":"26ecbcc1-7cd8-453b-96ba-6fea013ae275","Type":"ContainerStarted","Data":"58c02c8fef522ac81edc8b96d8b98d4847fb0d5dfa7b531b7bd4567551d52d0d"}
Feb 24 16:37:12 crc kubenswrapper[4982]: I0224 16:37:12.302571 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/must-gather-4tsr9" event={"ID":"26ecbcc1-7cd8-453b-96ba-6fea013ae275","Type":"ContainerStarted","Data":"5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb"}
Feb 24 16:37:12 crc kubenswrapper[4982]: I0224 16:37:12.321765 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6bf2x/must-gather-4tsr9" podStartSLOduration=3.321747899 podStartE2EDuration="3.321747899s" podCreationTimestamp="2026-02-24 16:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 16:37:12.318339437 +0000 UTC m=+6493.937397970" watchObservedRunningTime="2026-02-24 16:37:12.321747899 +0000 UTC m=+6493.940806392"
Feb 24 16:37:16 crc kubenswrapper[4982]: I0224 16:37:16.769018 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bf2x/crc-debug-trkd2"]
Feb 24 16:37:16 crc kubenswrapper[4982]: I0224 16:37:16.788622 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-trkd2"
Feb 24 16:37:16 crc kubenswrapper[4982]: I0224 16:37:16.793391 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6bf2x"/"default-dockercfg-6jllv"
Feb 24 16:37:16 crc kubenswrapper[4982]: I0224 16:37:16.899258 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l6mp\" (UniqueName: \"kubernetes.io/projected/0dddc606-a731-4220-959e-cee8dfb90a41-kube-api-access-5l6mp\") pod \"crc-debug-trkd2\" (UID: \"0dddc606-a731-4220-959e-cee8dfb90a41\") " pod="openshift-must-gather-6bf2x/crc-debug-trkd2"
Feb 24 16:37:16 crc kubenswrapper[4982]: I0224 16:37:16.899356 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dddc606-a731-4220-959e-cee8dfb90a41-host\") pod \"crc-debug-trkd2\" (UID: \"0dddc606-a731-4220-959e-cee8dfb90a41\") " pod="openshift-must-gather-6bf2x/crc-debug-trkd2"
Feb 24 16:37:17 crc kubenswrapper[4982]: I0224 16:37:17.001317 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l6mp\" (UniqueName: \"kubernetes.io/projected/0dddc606-a731-4220-959e-cee8dfb90a41-kube-api-access-5l6mp\") pod \"crc-debug-trkd2\" (UID: \"0dddc606-a731-4220-959e-cee8dfb90a41\") " pod="openshift-must-gather-6bf2x/crc-debug-trkd2"
Feb 24 16:37:17 crc kubenswrapper[4982]: I0224 16:37:17.001394 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dddc606-a731-4220-959e-cee8dfb90a41-host\") pod \"crc-debug-trkd2\" (UID: \"0dddc606-a731-4220-959e-cee8dfb90a41\") " pod="openshift-must-gather-6bf2x/crc-debug-trkd2"
Feb 24 16:37:17 crc kubenswrapper[4982]: I0224 16:37:17.011152 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dddc606-a731-4220-959e-cee8dfb90a41-host\") pod \"crc-debug-trkd2\" (UID: \"0dddc606-a731-4220-959e-cee8dfb90a41\") " pod="openshift-must-gather-6bf2x/crc-debug-trkd2"
Feb 24 16:37:17 crc kubenswrapper[4982]: I0224 16:37:17.027955 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l6mp\" (UniqueName: \"kubernetes.io/projected/0dddc606-a731-4220-959e-cee8dfb90a41-kube-api-access-5l6mp\") pod \"crc-debug-trkd2\" (UID: \"0dddc606-a731-4220-959e-cee8dfb90a41\") " pod="openshift-must-gather-6bf2x/crc-debug-trkd2"
Feb 24 16:37:17 crc kubenswrapper[4982]: I0224 16:37:17.107632 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-trkd2"
Feb 24 16:37:17 crc kubenswrapper[4982]: I0224 16:37:17.369990 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/crc-debug-trkd2" event={"ID":"0dddc606-a731-4220-959e-cee8dfb90a41","Type":"ContainerStarted","Data":"a78e72c10c20b753f968e49025f69a94e85875ab127738ba0792d2206bec6652"}
Feb 24 16:37:18 crc kubenswrapper[4982]: I0224 16:37:18.380584 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/crc-debug-trkd2" event={"ID":"0dddc606-a731-4220-959e-cee8dfb90a41","Type":"ContainerStarted","Data":"e60555db9b5a077ad6bb8b0a2a1cc29adf0e095c883bfa285f9a7abfe6b86ba8"}
Feb 24 16:37:18 crc kubenswrapper[4982]: I0224 16:37:18.408382 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6bf2x/crc-debug-trkd2" podStartSLOduration=2.4083639359999998 podStartE2EDuration="2.408363936s" podCreationTimestamp="2026-02-24 16:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 16:37:18.395325481 +0000 UTC m=+6500.014383974" watchObservedRunningTime="2026-02-24 16:37:18.408363936 +0000 UTC m=+6500.027422429"
Feb 24 16:37:38 crc kubenswrapper[4982]: I0224 16:37:38.738490 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 16:37:38 crc kubenswrapper[4982]: I0224 16:37:38.738972 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.673578 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6b2ch"]
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.679849 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.686956 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6b2ch"]
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.777321 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-catalog-content\") pod \"redhat-operators-6b2ch\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") " pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.777806 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9plv\" (UniqueName: \"kubernetes.io/projected/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-kube-api-access-v9plv\") pod \"redhat-operators-6b2ch\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") " pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.778460 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-utilities\") pod \"redhat-operators-6b2ch\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") " pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.880591 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9plv\" (UniqueName: \"kubernetes.io/projected/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-kube-api-access-v9plv\") pod \"redhat-operators-6b2ch\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") " pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.880718 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-utilities\") pod \"redhat-operators-6b2ch\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") " pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.880776 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-catalog-content\") pod \"redhat-operators-6b2ch\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") " pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.881766 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-utilities\") pod \"redhat-operators-6b2ch\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") " pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.882020 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-catalog-content\") pod \"redhat-operators-6b2ch\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") " pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:57 crc kubenswrapper[4982]: I0224 16:37:57.910852 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9plv\" (UniqueName: \"kubernetes.io/projected/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-kube-api-access-v9plv\") pod \"redhat-operators-6b2ch\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") " pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:58 crc kubenswrapper[4982]: I0224 16:37:58.014390 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:37:58 crc kubenswrapper[4982]: I0224 16:37:58.831972 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6b2ch"]
Feb 24 16:37:59 crc kubenswrapper[4982]: I0224 16:37:59.830375 4982 generic.go:334] "Generic (PLEG): container finished" podID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerID="ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b" exitCode=0
Feb 24 16:37:59 crc kubenswrapper[4982]: I0224 16:37:59.830777 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b2ch" event={"ID":"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71","Type":"ContainerDied","Data":"ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b"}
Feb 24 16:37:59 crc kubenswrapper[4982]: I0224 16:37:59.830812 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b2ch" event={"ID":"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71","Type":"ContainerStarted","Data":"378fe123456425d458d1bcc9dc12d350529dc97d4737127cd7746e1dbae308d0"}
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.163217 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532518-rsrcr"]
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.165219 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532518-rsrcr"
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.168679 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8"
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.173560 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.181636 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.182884 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532518-rsrcr"]
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.239903 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkml6\" (UniqueName: \"kubernetes.io/projected/588664ef-cf4a-4df0-9bf9-be60560886c3-kube-api-access-dkml6\") pod \"auto-csr-approver-29532518-rsrcr\" (UID: \"588664ef-cf4a-4df0-9bf9-be60560886c3\") " pod="openshift-infra/auto-csr-approver-29532518-rsrcr"
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.342081 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkml6\" (UniqueName: \"kubernetes.io/projected/588664ef-cf4a-4df0-9bf9-be60560886c3-kube-api-access-dkml6\") pod \"auto-csr-approver-29532518-rsrcr\" (UID: \"588664ef-cf4a-4df0-9bf9-be60560886c3\") " pod="openshift-infra/auto-csr-approver-29532518-rsrcr"
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.364128 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkml6\" (UniqueName: \"kubernetes.io/projected/588664ef-cf4a-4df0-9bf9-be60560886c3-kube-api-access-dkml6\") pod \"auto-csr-approver-29532518-rsrcr\" (UID: \"588664ef-cf4a-4df0-9bf9-be60560886c3\") " pod="openshift-infra/auto-csr-approver-29532518-rsrcr"
Feb 24 16:38:00 crc kubenswrapper[4982]: I0224 16:38:00.485700 4982 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532518-rsrcr" Feb 24 16:38:01 crc kubenswrapper[4982]: I0224 16:38:01.005704 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532518-rsrcr"] Feb 24 16:38:01 crc kubenswrapper[4982]: W0224 16:38:01.007269 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod588664ef_cf4a_4df0_9bf9_be60560886c3.slice/crio-5aab52d9bcd3811f7e2c6bce933e44df6a327c98470ebcd1de2a1bca0cc5f88e WatchSource:0}: Error finding container 5aab52d9bcd3811f7e2c6bce933e44df6a327c98470ebcd1de2a1bca0cc5f88e: Status 404 returned error can't find the container with id 5aab52d9bcd3811f7e2c6bce933e44df6a327c98470ebcd1de2a1bca0cc5f88e Feb 24 16:38:01 crc kubenswrapper[4982]: I0224 16:38:01.853094 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b2ch" event={"ID":"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71","Type":"ContainerStarted","Data":"9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c"} Feb 24 16:38:01 crc kubenswrapper[4982]: I0224 16:38:01.854356 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532518-rsrcr" event={"ID":"588664ef-cf4a-4df0-9bf9-be60560886c3","Type":"ContainerStarted","Data":"5aab52d9bcd3811f7e2c6bce933e44df6a327c98470ebcd1de2a1bca0cc5f88e"} Feb 24 16:38:02 crc kubenswrapper[4982]: I0224 16:38:02.864885 4982 generic.go:334] "Generic (PLEG): container finished" podID="0dddc606-a731-4220-959e-cee8dfb90a41" containerID="e60555db9b5a077ad6bb8b0a2a1cc29adf0e095c883bfa285f9a7abfe6b86ba8" exitCode=0 Feb 24 16:38:02 crc kubenswrapper[4982]: I0224 16:38:02.864974 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/crc-debug-trkd2" event={"ID":"0dddc606-a731-4220-959e-cee8dfb90a41","Type":"ContainerDied","Data":"e60555db9b5a077ad6bb8b0a2a1cc29adf0e095c883bfa285f9a7abfe6b86ba8"} Feb 24 16:38:03 crc kubenswrapper[4982]: I0224 16:38:03.876783 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532518-rsrcr" event={"ID":"588664ef-cf4a-4df0-9bf9-be60560886c3","Type":"ContainerStarted","Data":"f12b1e0f6068a1c7f919758eb73621427d4468a4168f453084c09dd7b61172a6"} Feb 24 16:38:03 crc kubenswrapper[4982]: I0224 16:38:03.897709 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532518-rsrcr" podStartSLOduration=2.800584712 podStartE2EDuration="3.897694926s" podCreationTimestamp="2026-02-24 16:38:00 +0000 UTC" firstStartedPulling="2026-02-24 16:38:01.008724098 +0000 UTC m=+6542.627782591" lastFinishedPulling="2026-02-24 16:38:02.105834312 +0000 UTC m=+6543.724892805" observedRunningTime="2026-02-24 16:38:03.895040934 +0000 UTC m=+6545.514099437" watchObservedRunningTime="2026-02-24 16:38:03.897694926 +0000 UTC m=+6545.516753419" Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.028757 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-trkd2" Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.065718 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bf2x/crc-debug-trkd2"] Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.073116 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bf2x/crc-debug-trkd2"] Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.230467 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dddc606-a731-4220-959e-cee8dfb90a41-host\") pod \"0dddc606-a731-4220-959e-cee8dfb90a41\" (UID: \"0dddc606-a731-4220-959e-cee8dfb90a41\") " Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.230883 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l6mp\" (UniqueName: \"kubernetes.io/projected/0dddc606-a731-4220-959e-cee8dfb90a41-kube-api-access-5l6mp\") pod \"0dddc606-a731-4220-959e-cee8dfb90a41\" (UID: \"0dddc606-a731-4220-959e-cee8dfb90a41\") " Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.231757 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0dddc606-a731-4220-959e-cee8dfb90a41-host" (OuterVolumeSpecName: "host") pod "0dddc606-a731-4220-959e-cee8dfb90a41" (UID: "0dddc606-a731-4220-959e-cee8dfb90a41"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.246673 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dddc606-a731-4220-959e-cee8dfb90a41-kube-api-access-5l6mp" (OuterVolumeSpecName: "kube-api-access-5l6mp") pod "0dddc606-a731-4220-959e-cee8dfb90a41" (UID: "0dddc606-a731-4220-959e-cee8dfb90a41"). InnerVolumeSpecName "kube-api-access-5l6mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.334935 4982 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0dddc606-a731-4220-959e-cee8dfb90a41-host\") on node \"crc\" DevicePath \"\"" Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.334963 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l6mp\" (UniqueName: \"kubernetes.io/projected/0dddc606-a731-4220-959e-cee8dfb90a41-kube-api-access-5l6mp\") on node \"crc\" DevicePath \"\"" Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.891873 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a78e72c10c20b753f968e49025f69a94e85875ab127738ba0792d2206bec6652" Feb 24 16:38:04 crc kubenswrapper[4982]: I0224 16:38:04.891898 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-trkd2" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.158696 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dddc606-a731-4220-959e-cee8dfb90a41" path="/var/lib/kubelet/pods/0dddc606-a731-4220-959e-cee8dfb90a41/volumes" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.408230 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bf2x/crc-debug-tj9nr"] Feb 24 16:38:05 crc kubenswrapper[4982]: E0224 16:38:05.408782 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dddc606-a731-4220-959e-cee8dfb90a41" containerName="container-00" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.408801 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dddc606-a731-4220-959e-cee8dfb90a41" containerName="container-00" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.409036 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dddc606-a731-4220-959e-cee8dfb90a41" containerName="container-00" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.409916 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.412069 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6bf2x"/"default-dockercfg-6jllv" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.565088 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz8tj\" (UniqueName: \"kubernetes.io/projected/fb8533f3-e973-40eb-82a5-ae6430f4d27d-kube-api-access-jz8tj\") pod \"crc-debug-tj9nr\" (UID: \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\") " pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.565519 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb8533f3-e973-40eb-82a5-ae6430f4d27d-host\") pod \"crc-debug-tj9nr\" (UID: \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\") " pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.667551 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb8533f3-e973-40eb-82a5-ae6430f4d27d-host\") pod \"crc-debug-tj9nr\" (UID: \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\") " pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.667735 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb8533f3-e973-40eb-82a5-ae6430f4d27d-host\") pod \"crc-debug-tj9nr\" (UID: \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\") " pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.667737 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz8tj\" (UniqueName: \"kubernetes.io/projected/fb8533f3-e973-40eb-82a5-ae6430f4d27d-kube-api-access-jz8tj\") pod \"crc-debug-tj9nr\" (UID: \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\") " pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.694241 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz8tj\" (UniqueName: 
\"kubernetes.io/projected/fb8533f3-e973-40eb-82a5-ae6430f4d27d-kube-api-access-jz8tj\") pod \"crc-debug-tj9nr\" (UID: \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\") " pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.725588 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:05 crc kubenswrapper[4982]: W0224 16:38:05.759906 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb8533f3_e973_40eb_82a5_ae6430f4d27d.slice/crio-38cf3a3438eb61847a27244a80967c7ee3b44a963227628e63ed2774b5ad1450 WatchSource:0}: Error finding container 38cf3a3438eb61847a27244a80967c7ee3b44a963227628e63ed2774b5ad1450: Status 404 returned error can't find the container with id 38cf3a3438eb61847a27244a80967c7ee3b44a963227628e63ed2774b5ad1450 Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.902425 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" event={"ID":"fb8533f3-e973-40eb-82a5-ae6430f4d27d","Type":"ContainerStarted","Data":"38cf3a3438eb61847a27244a80967c7ee3b44a963227628e63ed2774b5ad1450"} Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.904411 4982 generic.go:334] "Generic (PLEG): container finished" podID="588664ef-cf4a-4df0-9bf9-be60560886c3" containerID="f12b1e0f6068a1c7f919758eb73621427d4468a4168f453084c09dd7b61172a6" exitCode=0 Feb 24 16:38:05 crc kubenswrapper[4982]: I0224 16:38:05.904448 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532518-rsrcr" event={"ID":"588664ef-cf4a-4df0-9bf9-be60560886c3","Type":"ContainerDied","Data":"f12b1e0f6068a1c7f919758eb73621427d4468a4168f453084c09dd7b61172a6"} Feb 24 16:38:06 crc kubenswrapper[4982]: I0224 16:38:06.931023 4982 generic.go:334] "Generic (PLEG): container finished" podID="fb8533f3-e973-40eb-82a5-ae6430f4d27d" containerID="271c6f1eddd8e057ed6e36bfb50fcdd11766d9a4f25e55167e737aa299802544" exitCode=0 Feb 24 16:38:06 crc kubenswrapper[4982]: I0224 16:38:06.931616 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" event={"ID":"fb8533f3-e973-40eb-82a5-ae6430f4d27d","Type":"ContainerDied","Data":"271c6f1eddd8e057ed6e36bfb50fcdd11766d9a4f25e55167e737aa299802544"} Feb 24 16:38:07 crc kubenswrapper[4982]: I0224 16:38:07.682897 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532518-rsrcr" Feb 24 16:38:07 crc kubenswrapper[4982]: I0224 16:38:07.817479 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkml6\" (UniqueName: \"kubernetes.io/projected/588664ef-cf4a-4df0-9bf9-be60560886c3-kube-api-access-dkml6\") pod \"588664ef-cf4a-4df0-9bf9-be60560886c3\" (UID: \"588664ef-cf4a-4df0-9bf9-be60560886c3\") " Feb 24 16:38:07 crc kubenswrapper[4982]: I0224 16:38:07.841688 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588664ef-cf4a-4df0-9bf9-be60560886c3-kube-api-access-dkml6" (OuterVolumeSpecName: "kube-api-access-dkml6") pod "588664ef-cf4a-4df0-9bf9-be60560886c3" (UID: "588664ef-cf4a-4df0-9bf9-be60560886c3"). InnerVolumeSpecName "kube-api-access-dkml6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:38:07 crc kubenswrapper[4982]: I0224 16:38:07.920310 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkml6\" (UniqueName: \"kubernetes.io/projected/588664ef-cf4a-4df0-9bf9-be60560886c3-kube-api-access-dkml6\") on node \"crc\" DevicePath \"\"" Feb 24 16:38:07 crc kubenswrapper[4982]: I0224 16:38:07.945682 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532518-rsrcr" event={"ID":"588664ef-cf4a-4df0-9bf9-be60560886c3","Type":"ContainerDied","Data":"5aab52d9bcd3811f7e2c6bce933e44df6a327c98470ebcd1de2a1bca0cc5f88e"} Feb 24 16:38:07 crc kubenswrapper[4982]: I0224 16:38:07.945744 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aab52d9bcd3811f7e2c6bce933e44df6a327c98470ebcd1de2a1bca0cc5f88e" Feb 24 16:38:07 crc kubenswrapper[4982]: I0224 16:38:07.945835 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532518-rsrcr" Feb 24 16:38:07 crc kubenswrapper[4982]: I0224 16:38:07.950981 4982 generic.go:334] "Generic (PLEG): container finished" podID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerID="9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c" exitCode=0 Feb 24 16:38:07 crc kubenswrapper[4982]: I0224 16:38:07.951228 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b2ch" event={"ID":"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71","Type":"ContainerDied","Data":"9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c"} Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.001456 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532512-grt82"] Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.017256 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532512-grt82"] Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.070904 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.228522 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb8533f3-e973-40eb-82a5-ae6430f4d27d-host\") pod \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\" (UID: \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\") " Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.228675 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz8tj\" (UniqueName: \"kubernetes.io/projected/fb8533f3-e973-40eb-82a5-ae6430f4d27d-kube-api-access-jz8tj\") pod \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\" (UID: \"fb8533f3-e973-40eb-82a5-ae6430f4d27d\") " Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.228872 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb8533f3-e973-40eb-82a5-ae6430f4d27d-host" (OuterVolumeSpecName: "host") pod "fb8533f3-e973-40eb-82a5-ae6430f4d27d" (UID: "fb8533f3-e973-40eb-82a5-ae6430f4d27d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.229340 4982 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb8533f3-e973-40eb-82a5-ae6430f4d27d-host\") on node \"crc\" DevicePath \"\"" Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.239404 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8533f3-e973-40eb-82a5-ae6430f4d27d-kube-api-access-jz8tj" (OuterVolumeSpecName: "kube-api-access-jz8tj") pod "fb8533f3-e973-40eb-82a5-ae6430f4d27d" (UID: "fb8533f3-e973-40eb-82a5-ae6430f4d27d"). InnerVolumeSpecName "kube-api-access-jz8tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.331145 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz8tj\" (UniqueName: \"kubernetes.io/projected/fb8533f3-e973-40eb-82a5-ae6430f4d27d-kube-api-access-jz8tj\") on node \"crc\" DevicePath \"\"" Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.737689 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.737741 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.963619 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b2ch" event={"ID":"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71","Type":"ContainerStarted","Data":"2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768"} Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.965411 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" event={"ID":"fb8533f3-e973-40eb-82a5-ae6430f4d27d","Type":"ContainerDied","Data":"38cf3a3438eb61847a27244a80967c7ee3b44a963227628e63ed2774b5ad1450"} Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.965694 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38cf3a3438eb61847a27244a80967c7ee3b44a963227628e63ed2774b5ad1450" Feb 24 16:38:08 crc kubenswrapper[4982]: I0224 16:38:08.965490 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-tj9nr" Feb 24 16:38:09 crc kubenswrapper[4982]: I0224 16:38:09.001054 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6b2ch" podStartSLOduration=3.481289421 podStartE2EDuration="12.001033909s" podCreationTimestamp="2026-02-24 16:37:57 +0000 UTC" firstStartedPulling="2026-02-24 16:37:59.836883951 +0000 UTC m=+6541.455942444" lastFinishedPulling="2026-02-24 16:38:08.356628439 +0000 UTC m=+6549.975686932" observedRunningTime="2026-02-24 16:38:08.986102514 +0000 UTC m=+6550.605161017" watchObservedRunningTime="2026-02-24 16:38:09.001033909 +0000 UTC m=+6550.620092402" Feb 24 16:38:09 crc kubenswrapper[4982]: I0224 16:38:09.158360 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1" path="/var/lib/kubelet/pods/c4d9b8e8-3f51-4e56-8a2a-7f1f419858e1/volumes" Feb 24 16:38:09 crc kubenswrapper[4982]: I0224 16:38:09.219368 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bf2x/crc-debug-tj9nr"] Feb 24 16:38:09 crc kubenswrapper[4982]: I0224 16:38:09.225111 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bf2x/crc-debug-tj9nr"] Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.420244 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6bf2x/crc-debug-bj6ct"] Feb 24 16:38:10 crc kubenswrapper[4982]: E0224 16:38:10.421006 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8533f3-e973-40eb-82a5-ae6430f4d27d" containerName="container-00" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.421018 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8533f3-e973-40eb-82a5-ae6430f4d27d" containerName="container-00" Feb 24 16:38:10 crc kubenswrapper[4982]: E0224 16:38:10.421053 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588664ef-cf4a-4df0-9bf9-be60560886c3" containerName="oc" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.421059 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="588664ef-cf4a-4df0-9bf9-be60560886c3" containerName="oc" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.421259 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8533f3-e973-40eb-82a5-ae6430f4d27d" containerName="container-00" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.421273 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="588664ef-cf4a-4df0-9bf9-be60560886c3" containerName="oc" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.422069 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.424792 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6bf2x"/"default-dockercfg-6jllv" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.577442 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l4tc\" (UniqueName: \"kubernetes.io/projected/636c2357-a6c5-4e69-b560-bbbec3b70739-kube-api-access-4l4tc\") pod \"crc-debug-bj6ct\" (UID: \"636c2357-a6c5-4e69-b560-bbbec3b70739\") " pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.577663 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/636c2357-a6c5-4e69-b560-bbbec3b70739-host\") pod \"crc-debug-bj6ct\" (UID: \"636c2357-a6c5-4e69-b560-bbbec3b70739\") " pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.680346 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/636c2357-a6c5-4e69-b560-bbbec3b70739-host\") pod \"crc-debug-bj6ct\" (UID: \"636c2357-a6c5-4e69-b560-bbbec3b70739\") " pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.680487 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/636c2357-a6c5-4e69-b560-bbbec3b70739-host\") pod \"crc-debug-bj6ct\" (UID: \"636c2357-a6c5-4e69-b560-bbbec3b70739\") " pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.680609 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l4tc\" (UniqueName: \"kubernetes.io/projected/636c2357-a6c5-4e69-b560-bbbec3b70739-kube-api-access-4l4tc\") pod \"crc-debug-bj6ct\" (UID: \"636c2357-a6c5-4e69-b560-bbbec3b70739\") " pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.703484 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l4tc\" (UniqueName: \"kubernetes.io/projected/636c2357-a6c5-4e69-b560-bbbec3b70739-kube-api-access-4l4tc\") pod \"crc-debug-bj6ct\" (UID: \"636c2357-a6c5-4e69-b560-bbbec3b70739\") " pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.738128 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:10 crc kubenswrapper[4982]: W0224 16:38:10.768563 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636c2357_a6c5_4e69_b560_bbbec3b70739.slice/crio-b3f2bf24c0208bef110f7da15eedb22cb1ab71367dc01574201feb42d47a5126 WatchSource:0}: Error finding container b3f2bf24c0208bef110f7da15eedb22cb1ab71367dc01574201feb42d47a5126: Status 404 returned error can't find the container with id b3f2bf24c0208bef110f7da15eedb22cb1ab71367dc01574201feb42d47a5126 Feb 24 16:38:10 crc kubenswrapper[4982]: I0224 16:38:10.987514 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" event={"ID":"636c2357-a6c5-4e69-b560-bbbec3b70739","Type":"ContainerStarted","Data":"b3f2bf24c0208bef110f7da15eedb22cb1ab71367dc01574201feb42d47a5126"} Feb 24 16:38:11 crc kubenswrapper[4982]: I0224 16:38:11.159265 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8533f3-e973-40eb-82a5-ae6430f4d27d" path="/var/lib/kubelet/pods/fb8533f3-e973-40eb-82a5-ae6430f4d27d/volumes" Feb 24 16:38:11 crc kubenswrapper[4982]: I0224 16:38:11.997330 4982 generic.go:334] "Generic (PLEG): container finished" podID="636c2357-a6c5-4e69-b560-bbbec3b70739" containerID="725404b95ee4db42b03332c5ee03f6d412dcd617261a12b794f6250e609449cb" exitCode=0 Feb 24 16:38:11 crc kubenswrapper[4982]: I0224 16:38:11.997372 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" event={"ID":"636c2357-a6c5-4e69-b560-bbbec3b70739","Type":"ContainerDied","Data":"725404b95ee4db42b03332c5ee03f6d412dcd617261a12b794f6250e609449cb"} Feb 24 16:38:12 crc kubenswrapper[4982]: I0224 16:38:12.053847 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bf2x/crc-debug-bj6ct"] Feb 24 16:38:12 crc kubenswrapper[4982]: I0224 16:38:12.067083 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bf2x/crc-debug-bj6ct"] Feb 24 16:38:13 crc kubenswrapper[4982]: I0224 16:38:13.122802 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:13 crc kubenswrapper[4982]: I0224 16:38:13.236650 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/636c2357-a6c5-4e69-b560-bbbec3b70739-host\") pod \"636c2357-a6c5-4e69-b560-bbbec3b70739\" (UID: \"636c2357-a6c5-4e69-b560-bbbec3b70739\") " Feb 24 16:38:13 crc kubenswrapper[4982]: I0224 16:38:13.236731 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l4tc\" (UniqueName: \"kubernetes.io/projected/636c2357-a6c5-4e69-b560-bbbec3b70739-kube-api-access-4l4tc\") pod \"636c2357-a6c5-4e69-b560-bbbec3b70739\" (UID: \"636c2357-a6c5-4e69-b560-bbbec3b70739\") " Feb 24 16:38:13 crc kubenswrapper[4982]: I0224 16:38:13.236750 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/636c2357-a6c5-4e69-b560-bbbec3b70739-host" (OuterVolumeSpecName: "host") pod "636c2357-a6c5-4e69-b560-bbbec3b70739" (UID: "636c2357-a6c5-4e69-b560-bbbec3b70739"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 16:38:13 crc kubenswrapper[4982]: I0224 16:38:13.237694 4982 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/636c2357-a6c5-4e69-b560-bbbec3b70739-host\") on node \"crc\" DevicePath \"\"" Feb 24 16:38:13 crc kubenswrapper[4982]: I0224 16:38:13.249710 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636c2357-a6c5-4e69-b560-bbbec3b70739-kube-api-access-4l4tc" (OuterVolumeSpecName: "kube-api-access-4l4tc") pod "636c2357-a6c5-4e69-b560-bbbec3b70739" (UID: "636c2357-a6c5-4e69-b560-bbbec3b70739"). InnerVolumeSpecName "kube-api-access-4l4tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:38:13 crc kubenswrapper[4982]: I0224 16:38:13.340542 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l4tc\" (UniqueName: \"kubernetes.io/projected/636c2357-a6c5-4e69-b560-bbbec3b70739-kube-api-access-4l4tc\") on node \"crc\" DevicePath \"\"" Feb 24 16:38:14 crc kubenswrapper[4982]: I0224 16:38:14.024308 4982 scope.go:117] "RemoveContainer" containerID="725404b95ee4db42b03332c5ee03f6d412dcd617261a12b794f6250e609449cb" Feb 24 16:38:14 crc kubenswrapper[4982]: I0224 16:38:14.024355 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/crc-debug-bj6ct" Feb 24 16:38:15 crc kubenswrapper[4982]: I0224 16:38:15.157364 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636c2357-a6c5-4e69-b560-bbbec3b70739" path="/var/lib/kubelet/pods/636c2357-a6c5-4e69-b560-bbbec3b70739/volumes" Feb 24 16:38:18 crc kubenswrapper[4982]: I0224 16:38:18.014741 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6b2ch" Feb 24 16:38:18 crc kubenswrapper[4982]: I0224 16:38:18.016325 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6b2ch" Feb 24 16:38:19 crc kubenswrapper[4982]: I0224 16:38:19.069117 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6b2ch" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="registry-server" probeResult="failure" output=< Feb 24 16:38:19 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:38:19 crc kubenswrapper[4982]: > Feb 24 16:38:29 crc kubenswrapper[4982]: I0224 16:38:29.082294 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6b2ch" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="registry-server" probeResult="failure" output=< Feb 24 16:38:29 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:38:29 crc kubenswrapper[4982]: > Feb 24 16:38:38 crc kubenswrapper[4982]: I0224 16:38:38.738172 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:38:38 crc kubenswrapper[4982]: I0224 16:38:38.738745 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:38:38 crc kubenswrapper[4982]: I0224 16:38:38.738790 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 16:38:38 crc kubenswrapper[4982]: I0224 16:38:38.739704 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ace8224ab7a5a8617509839dcd2d4d3ee3d53ce07edc0028d24bce3f1f97c1b"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 16:38:38 crc kubenswrapper[4982]: I0224 16:38:38.739829 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://8ace8224ab7a5a8617509839dcd2d4d3ee3d53ce07edc0028d24bce3f1f97c1b" gracePeriod=600 Feb 24 16:38:39 crc kubenswrapper[4982]: I0224 16:38:39.060053 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6b2ch" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="registry-server" probeResult="failure" output=< Feb 24 16:38:39 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:38:39 crc kubenswrapper[4982]: > Feb 24 16:38:39 crc kubenswrapper[4982]: I0224 16:38:39.313334 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="8ace8224ab7a5a8617509839dcd2d4d3ee3d53ce07edc0028d24bce3f1f97c1b" exitCode=0 Feb 24 16:38:39 crc kubenswrapper[4982]: I0224 16:38:39.313375 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"8ace8224ab7a5a8617509839dcd2d4d3ee3d53ce07edc0028d24bce3f1f97c1b"} Feb 24 16:38:39 crc kubenswrapper[4982]: I0224 16:38:39.313407 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50"} Feb 24 16:38:39 crc kubenswrapper[4982]: I0224 16:38:39.313424 4982 scope.go:117] "RemoveContainer" containerID="a4ca168d04a22349f415a48b4dc90125df16cee21c459271de45ee02c9617ebe" Feb 24 16:38:48 crc kubenswrapper[4982]: I0224 16:38:48.100291 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6b2ch" Feb 24 16:38:48 crc kubenswrapper[4982]: I0224 16:38:48.179138 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6b2ch" Feb 24 16:38:48 crc kubenswrapper[4982]: I0224 16:38:48.355783 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6b2ch"] Feb 24 16:38:49 crc kubenswrapper[4982]: I0224 16:38:49.415160 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6b2ch" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="registry-server" containerID="cri-o://2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768" 
Feb 24 16:38:49 crc kubenswrapper[4982]: I0224 16:38:49.995040 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.020148 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9plv\" (UniqueName: \"kubernetes.io/projected/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-kube-api-access-v9plv\") pod \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") "
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.020580 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-catalog-content\") pod \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") "
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.020849 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-utilities\") pod \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\" (UID: \"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71\") "
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.022855 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-utilities" (OuterVolumeSpecName: "utilities") pod "bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" (UID: "bc9c8bb4-1509-4f29-9bef-dee2e84a2d71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.034723 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-kube-api-access-v9plv" (OuterVolumeSpecName: "kube-api-access-v9plv") pod "bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" (UID: "bc9c8bb4-1509-4f29-9bef-dee2e84a2d71"). InnerVolumeSpecName "kube-api-access-v9plv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.124439 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.124471 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9plv\" (UniqueName: \"kubernetes.io/projected/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-kube-api-access-v9plv\") on node \"crc\" DevicePath \"\""
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.140938 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" (UID: "bc9c8bb4-1509-4f29-9bef-dee2e84a2d71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.225315 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.434744 4982 generic.go:334] "Generic (PLEG): container finished" podID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerID="2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768" exitCode=0
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.434873 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6b2ch"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.434896 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b2ch" event={"ID":"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71","Type":"ContainerDied","Data":"2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768"}
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.438705 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b2ch" event={"ID":"bc9c8bb4-1509-4f29-9bef-dee2e84a2d71","Type":"ContainerDied","Data":"378fe123456425d458d1bcc9dc12d350529dc97d4737127cd7746e1dbae308d0"}
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.438750 4982 scope.go:117] "RemoveContainer" containerID="2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.477469 4982 scope.go:117] "RemoveContainer" containerID="9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.502108 4982 scope.go:117] "RemoveContainer" containerID="ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.507844 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6b2ch"]
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.516890 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6b2ch"]
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.571617 4982 scope.go:117] "RemoveContainer" containerID="2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768"
Feb 24 16:38:50 crc kubenswrapper[4982]: E0224 16:38:50.575248 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768\": container with ID starting with 2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768 not found: ID does not exist" containerID="2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.575340 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768"} err="failed to get container status \"2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768\": rpc error: code = NotFound desc = could not find container \"2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768\": container with ID starting with 2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768 not found: ID does not exist"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.575377 4982 scope.go:117] "RemoveContainer" containerID="9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c"
Feb 24 16:38:50 crc kubenswrapper[4982]: E0224 16:38:50.576001 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c\": container with ID starting with 9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c not found: ID does not exist" containerID="9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.576089 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c"} err="failed to get container status \"9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c\": rpc error: code = NotFound desc = could not find container \"9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c\": container with ID starting with 9d307c541e1f95dc467064483b41e380d1ee15d8c66ba3214c252ffa29eeb38c not found: ID does not exist"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.576161 4982 scope.go:117] "RemoveContainer" containerID="ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b"
Feb 24 16:38:50 crc kubenswrapper[4982]: E0224 16:38:50.576596 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b\": container with ID starting with ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b not found: ID does not exist" containerID="ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b"
Feb 24 16:38:50 crc kubenswrapper[4982]: I0224 16:38:50.576647 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b"} err="failed to get container status \"ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b\": rpc error: code = NotFound desc = could not find container \"ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b\": container with ID starting with ea4154c6d5dc180568f5c82cb19f3da0629cb1fa38453180edca2105190d0d3b not found: ID does not exist"
Feb 24 16:38:51 crc kubenswrapper[4982]: I0224 16:38:51.161847 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" path="/var/lib/kubelet/pods/bc9c8bb4-1509-4f29-9bef-dee2e84a2d71/volumes"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.795056 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mjjh9"]
Feb 24 16:38:52 crc kubenswrapper[4982]: E0224 16:38:52.796334 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="extract-content"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.796359 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="extract-content"
Feb 24 16:38:52 crc kubenswrapper[4982]: E0224 16:38:52.796375 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="registry-server"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.796387 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="registry-server"
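The paired "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" entries above are benign: RemoveContainer is retried for containers CRI-O has already deleted, and a NotFound status simply means the cleanup already happened. A sketch of treating gRPC NotFound as "already removed" during cleanup (the removeContainer stub is invented; this is not kubelet code):

```go
// Sketch: idempotent container removal, where a gRPC NotFound from the
// runtime means the container is already gone, so the DeleteContainer
// errors in the log above are harmless. Illustrative only.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer call that races
// with the runtime's own garbage collection.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

func cleanup(id string) error {
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already gone; treating as removed\n", id)
			return nil // idempotent: nothing left to do
		}
		return err
	}
	return nil
}

func main() {
	_ = cleanup("2f8eea14d477e9b4953a939fa130a10da05c7fb0621b8a379f73949934c33768")
}
```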
Feb 24 16:38:52 crc kubenswrapper[4982]: E0224 16:38:52.796406 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="extract-utilities"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.796418 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="extract-utilities"
Feb 24 16:38:52 crc kubenswrapper[4982]: E0224 16:38:52.796445 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636c2357-a6c5-4e69-b560-bbbec3b70739" containerName="container-00"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.796457 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="636c2357-a6c5-4e69-b560-bbbec3b70739" containerName="container-00"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.796890 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="636c2357-a6c5-4e69-b560-bbbec3b70739" containerName="container-00"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.796935 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9c8bb4-1509-4f29-9bef-dee2e84a2d71" containerName="registry-server"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.799647 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjjh9"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.818267 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjjh9"]
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.905324 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-catalog-content\") pod \"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.905706 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlf9h\" (UniqueName: \"kubernetes.io/projected/5cabc55b-125c-43cd-8981-c9c448e7a16e-kube-api-access-hlf9h\") pod \"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9"
Feb 24 16:38:52 crc kubenswrapper[4982]: I0224 16:38:52.905964 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-utilities\") pod \"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9"
Feb 24 16:38:53 crc kubenswrapper[4982]: I0224 16:38:53.008573 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-utilities\") pod \"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9"
Feb 24 16:38:53 crc kubenswrapper[4982]: I0224 16:38:53.009199 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-utilities\") pod \"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9"
\"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:38:53 crc kubenswrapper[4982]: I0224 16:38:53.009580 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-catalog-content\") pod \"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:38:53 crc kubenswrapper[4982]: I0224 16:38:53.009795 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlf9h\" (UniqueName: \"kubernetes.io/projected/5cabc55b-125c-43cd-8981-c9c448e7a16e-kube-api-access-hlf9h\") pod \"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:38:53 crc kubenswrapper[4982]: I0224 16:38:53.010064 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-catalog-content\") pod \"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:38:53 crc kubenswrapper[4982]: I0224 16:38:53.030014 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlf9h\" (UniqueName: \"kubernetes.io/projected/5cabc55b-125c-43cd-8981-c9c448e7a16e-kube-api-access-hlf9h\") pod \"redhat-marketplace-mjjh9\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:38:53 crc kubenswrapper[4982]: I0224 16:38:53.128172 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:38:53 crc kubenswrapper[4982]: I0224 16:38:53.685325 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjjh9"] Feb 24 16:38:54 crc kubenswrapper[4982]: I0224 16:38:54.491825 4982 generic.go:334] "Generic (PLEG): container finished" podID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerID="ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c" exitCode=0 Feb 24 16:38:54 crc kubenswrapper[4982]: I0224 16:38:54.491921 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjh9" event={"ID":"5cabc55b-125c-43cd-8981-c9c448e7a16e","Type":"ContainerDied","Data":"ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c"} Feb 24 16:38:54 crc kubenswrapper[4982]: I0224 16:38:54.492116 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjh9" event={"ID":"5cabc55b-125c-43cd-8981-c9c448e7a16e","Type":"ContainerStarted","Data":"4b653266a749345e3192fd0fa48d1199a239101107e666967039e2a2668f27f8"} Feb 24 16:38:55 crc kubenswrapper[4982]: I0224 16:38:55.504066 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjh9" event={"ID":"5cabc55b-125c-43cd-8981-c9c448e7a16e","Type":"ContainerStarted","Data":"5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3"} Feb 24 16:38:56 crc kubenswrapper[4982]: E0224 16:38:56.350938 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cabc55b_125c_43cd_8981_c9c448e7a16e.slice/crio-5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cabc55b_125c_43cd_8981_c9c448e7a16e.slice/crio-conmon-5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3.scope\": RecentStats: unable to find data in memory cache]" Feb 24 16:38:56 crc kubenswrapper[4982]: I0224 16:38:56.514296 4982 generic.go:334] "Generic (PLEG): container finished" podID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerID="5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3" exitCode=0 Feb 24 16:38:56 crc kubenswrapper[4982]: I0224 16:38:56.514343 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjh9" event={"ID":"5cabc55b-125c-43cd-8981-c9c448e7a16e","Type":"ContainerDied","Data":"5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3"} Feb 24 16:38:57 crc kubenswrapper[4982]: I0224 16:38:57.271282 4982 scope.go:117] "RemoveContainer" containerID="ac94e7cbac225dff8692c0ebc5b6e483e143bdb73e39331ee61cea50712da2c5" Feb 24 16:38:57 crc kubenswrapper[4982]: I0224 16:38:57.528250 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjh9" event={"ID":"5cabc55b-125c-43cd-8981-c9c448e7a16e","Type":"ContainerStarted","Data":"24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca"} Feb 24 16:38:57 crc kubenswrapper[4982]: I0224 16:38:57.552583 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mjjh9" podStartSLOduration=3.14268088 podStartE2EDuration="5.552563217s" podCreationTimestamp="2026-02-24 16:38:52 +0000 UTC" firstStartedPulling="2026-02-24 
16:38:54.49388831 +0000 UTC m=+6596.112946803" lastFinishedPulling="2026-02-24 16:38:56.903770647 +0000 UTC m=+6598.522829140" observedRunningTime="2026-02-24 16:38:57.546105152 +0000 UTC m=+6599.165163645" watchObservedRunningTime="2026-02-24 16:38:57.552563217 +0000 UTC m=+6599.171621700" Feb 24 16:38:58 crc kubenswrapper[4982]: I0224 16:38:58.422438 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7b46506f-5421-47ab-9ed9-2328c663adb8/aodh-api/0.log" Feb 24 16:38:58 crc kubenswrapper[4982]: I0224 16:38:58.658740 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7b46506f-5421-47ab-9ed9-2328c663adb8/aodh-listener/0.log" Feb 24 16:38:58 crc kubenswrapper[4982]: I0224 16:38:58.730024 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7b46506f-5421-47ab-9ed9-2328c663adb8/aodh-evaluator/0.log" Feb 24 16:38:58 crc kubenswrapper[4982]: I0224 16:38:58.763251 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_7b46506f-5421-47ab-9ed9-2328c663adb8/aodh-notifier/0.log" Feb 24 16:38:58 crc kubenswrapper[4982]: I0224 16:38:58.934123 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5466bd89fd-vjk4t_ba8a29e9-d071-470e-80eb-8c749b582614/barbican-api/0.log" Feb 24 16:38:58 crc kubenswrapper[4982]: I0224 16:38:58.982864 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5466bd89fd-vjk4t_ba8a29e9-d071-470e-80eb-8c749b582614/barbican-api-log/0.log" Feb 24 16:38:59 crc kubenswrapper[4982]: I0224 16:38:59.119137 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77797c9b78-rvhwv_17fd2f10-64fe-4299-9f45-b81e02687f53/barbican-keystone-listener/0.log" Feb 24 16:38:59 crc kubenswrapper[4982]: I0224 16:38:59.415736 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77f5949dd7-rps9v_7334fe93-50f7-486c-a11a-1cac15b026da/barbican-worker/0.log" Feb 24 16:38:59 crc kubenswrapper[4982]: I0224 16:38:59.418076 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77797c9b78-rvhwv_17fd2f10-64fe-4299-9f45-b81e02687f53/barbican-keystone-listener-log/0.log" Feb 24 16:38:59 crc kubenswrapper[4982]: I0224 16:38:59.522375 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77f5949dd7-rps9v_7334fe93-50f7-486c-a11a-1cac15b026da/barbican-worker-log/0.log" Feb 24 16:38:59 crc kubenswrapper[4982]: I0224 16:38:59.659860 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wkddc_ff453eac-e860-4c72-9c3c-0aa80e0554d1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:38:59 crc kubenswrapper[4982]: I0224 16:38:59.857963 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f9dd449a-d430-42cf-8d1a-492c750fde59/ceilometer-central-agent/0.log" Feb 24 16:38:59 crc kubenswrapper[4982]: I0224 16:38:59.907610 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f9dd449a-d430-42cf-8d1a-492c750fde59/proxy-httpd/0.log" Feb 24 16:38:59 crc kubenswrapper[4982]: I0224 16:38:59.925134 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_f9dd449a-d430-42cf-8d1a-492c750fde59/ceilometer-notification-agent/0.log" Feb 24 16:38:59 crc kubenswrapper[4982]: I0224 16:38:59.948879 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_f9dd449a-d430-42cf-8d1a-492c750fde59/sg-core/0.log" Feb 24 16:39:00 crc kubenswrapper[4982]: I0224 16:39:00.158388 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0aa9e47a-4c17-47f4-9541-60b8f91236fd/cinder-api-log/0.log" Feb 24 16:39:00 crc kubenswrapper[4982]: I0224 16:39:00.238074 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0aa9e47a-4c17-47f4-9541-60b8f91236fd/cinder-api/0.log" Feb 24 16:39:00 crc kubenswrapper[4982]: I0224 16:39:00.604612 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9e704a92-af74-4bf8-bf2f-3d684b08a722/probe/0.log" Feb 24 16:39:00 crc kubenswrapper[4982]: I0224 16:39:00.632290 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9e704a92-af74-4bf8-bf2f-3d684b08a722/cinder-scheduler/0.log" Feb 24 16:39:00 crc kubenswrapper[4982]: I0224 16:39:00.730873 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hwnjv_56e1fd8e-5bca-42ee-8dc3-aa639d1f9fd5/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:00 crc kubenswrapper[4982]: I0224 16:39:00.862103 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xgn4g_17fad281-67b9-48ec-b6a3-82ecd730d659/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:01 crc kubenswrapper[4982]: I0224 16:39:01.008398 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-82n9c_4c978f3a-f7d2-4a33-a206-b38bf80aae1f/init/0.log" Feb 24 16:39:01 crc kubenswrapper[4982]: I0224 16:39:01.171233 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-82n9c_4c978f3a-f7d2-4a33-a206-b38bf80aae1f/init/0.log" Feb 24 16:39:01 crc kubenswrapper[4982]: I0224 16:39:01.239753 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-82n9c_4c978f3a-f7d2-4a33-a206-b38bf80aae1f/dnsmasq-dns/0.log" Feb 24 16:39:01 crc kubenswrapper[4982]: I0224 16:39:01.289380 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rhmm2_b7210463-d869-4f1c-8d7b-60985c525f58/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:01 crc kubenswrapper[4982]: I0224 16:39:01.459397 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_dfd138cb-0c04-4683-9af6-b623fb39c84f/glance-log/0.log" Feb 24 16:39:01 crc kubenswrapper[4982]: I0224 16:39:01.501144 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_dfd138cb-0c04-4683-9af6-b623fb39c84f/glance-httpd/0.log" Feb 24 16:39:01 crc kubenswrapper[4982]: I0224 16:39:01.685936 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_62fed6a9-a36d-485f-bedc-cb54f1dad363/glance-log/0.log" Feb 24 16:39:01 crc kubenswrapper[4982]: I0224 16:39:01.689531 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_62fed6a9-a36d-485f-bedc-cb54f1dad363/glance-httpd/0.log" Feb 24 16:39:02 crc kubenswrapper[4982]: I0224 16:39:02.327099 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-8fbqv_6a256c5e-deb6-4688-a9d7-080502e07609/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:02 crc kubenswrapper[4982]: I0224 16:39:02.339692 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-847b6bf986-6knzb_ea93c40b-bf1e-433e-8782-fcb3781962a7/heat-engine/0.log" Feb 24 16:39:02 crc kubenswrapper[4982]: I0224 16:39:02.562287 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-779fc6f99c-lz6wq_f7257c07-0ab1-4030-9c40-c942d0de78f9/heat-api/0.log" Feb 24 16:39:02 crc kubenswrapper[4982]: I0224 16:39:02.566773 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2zfwl_1b676930-50c4-4caa-aeba-85814ae02e3a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:02 crc kubenswrapper[4982]: I0224 16:39:02.621484 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-66f8dcd8d5-cmr9n_533504d0-9d95-4bbd-8c8e-10696cd115a6/heat-cfnapi/0.log" Feb 24 16:39:02 crc kubenswrapper[4982]: I0224 16:39:02.804381 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29532481-7srlf_e79089ba-59a1-4b26-b33f-9f4b5406d41e/keystone-cron/0.log" Feb 24 16:39:02 crc kubenswrapper[4982]: I0224 16:39:02.897230 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ddb94506-62fc-4912-92e6-6d37a079eba1/kube-state-metrics/0.log" Feb 24 16:39:03 crc kubenswrapper[4982]: I0224 16:39:03.129320 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:39:03 crc kubenswrapper[4982]: I0224 16:39:03.130263 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:39:03 crc kubenswrapper[4982]: I0224 16:39:03.146355 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-wx72q_dd768711-30f7-4520-845c-c2f7c45a7c6b/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:03 crc kubenswrapper[4982]: I0224 16:39:03.179143 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5jzlx_1b824030-74f0-4482-b022-6c9cc5e52aac/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:03 crc kubenswrapper[4982]: I0224 16:39:03.188023 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:39:03 crc kubenswrapper[4982]: I0224 16:39:03.348658 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-66dcfd46c4-xzlpx_69f6a530-6cab-4af8-a122-2648213a4c8b/keystone-api/0.log" Feb 24 16:39:03 crc kubenswrapper[4982]: I0224 16:39:03.652078 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:39:03 crc kubenswrapper[4982]: I0224 16:39:03.702277 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_920bb33f-bbf4-4a58-bfac-ce0d9eda6001/mysqld-exporter/0.log" Feb 24 16:39:03 crc kubenswrapper[4982]: I0224 16:39:03.710695 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjjh9"] Feb 24 16:39:04 crc kubenswrapper[4982]: I0224 16:39:04.095628 4982 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-nsqqn_c374a6b9-31c3-45f7-a188-ec0dc5df244d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:04 crc kubenswrapper[4982]: I0224 16:39:04.137543 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f7b97458f-lsgqv_ace3b91f-7d2e-405d-a191-1260b2def481/neutron-httpd/0.log" Feb 24 16:39:04 crc kubenswrapper[4982]: I0224 16:39:04.243009 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f7b97458f-lsgqv_ace3b91f-7d2e-405d-a191-1260b2def481/neutron-api/0.log" Feb 24 16:39:04 crc kubenswrapper[4982]: I0224 16:39:04.723267 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_802ee268-f1cc-4138-8361-c61c4b2d005a/nova-cell0-conductor-conductor/0.log" Feb 24 16:39:04 crc kubenswrapper[4982]: I0224 16:39:04.995044 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aac85fdb-fa1c-47c7-8904-d72aa10f69ae/nova-api-log/0.log" Feb 24 16:39:05 crc kubenswrapper[4982]: I0224 16:39:05.136786 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_67c35239-eb72-4ef1-b22a-be4ea3374b3c/nova-cell1-conductor-conductor/0.log" Feb 24 16:39:05 crc kubenswrapper[4982]: I0224 16:39:05.417060 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-qjzqt_de5e4e7c-74db-408c-bad2-18f9d3bdfeb7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:05 crc kubenswrapper[4982]: I0224 16:39:05.446204 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_19052f4e-3c96-4cf0-82f6-be3740f6a857/nova-cell1-novncproxy-novncproxy/0.log" Feb 24 16:39:05 crc kubenswrapper[4982]: I0224 16:39:05.580924 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_aac85fdb-fa1c-47c7-8904-d72aa10f69ae/nova-api-api/0.log" Feb 24 16:39:05 crc kubenswrapper[4982]: I0224 16:39:05.600792 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mjjh9" podUID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerName="registry-server" containerID="cri-o://24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca" gracePeriod=2 Feb 24 16:39:05 crc kubenswrapper[4982]: I0224 16:39:05.738048 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aa8b9e96-e44a-4a46-87c6-0a473fc97e22/nova-metadata-log/0.log" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.063507 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_92994661-7f5e-4171-9e13-f725269e3475/mysql-bootstrap/0.log" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.189534 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.226468 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0c763ae1-e628-4088-b3da-1a4392f1cb37/nova-scheduler-scheduler/0.log" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.302595 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-utilities\") pod \"5cabc55b-125c-43cd-8981-c9c448e7a16e\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.302641 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-catalog-content\") pod \"5cabc55b-125c-43cd-8981-c9c448e7a16e\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.302674 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlf9h\" (UniqueName: \"kubernetes.io/projected/5cabc55b-125c-43cd-8981-c9c448e7a16e-kube-api-access-hlf9h\") pod \"5cabc55b-125c-43cd-8981-c9c448e7a16e\" (UID: \"5cabc55b-125c-43cd-8981-c9c448e7a16e\") " Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.304181 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-utilities" (OuterVolumeSpecName: "utilities") pod "5cabc55b-125c-43cd-8981-c9c448e7a16e" (UID: "5cabc55b-125c-43cd-8981-c9c448e7a16e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.311662 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cabc55b-125c-43cd-8981-c9c448e7a16e-kube-api-access-hlf9h" (OuterVolumeSpecName: "kube-api-access-hlf9h") pod "5cabc55b-125c-43cd-8981-c9c448e7a16e" (UID: "5cabc55b-125c-43cd-8981-c9c448e7a16e"). InnerVolumeSpecName "kube-api-access-hlf9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.324252 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cabc55b-125c-43cd-8981-c9c448e7a16e" (UID: "5cabc55b-125c-43cd-8981-c9c448e7a16e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.326398 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_92994661-7f5e-4171-9e13-f725269e3475/mysql-bootstrap/0.log" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.331345 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_92994661-7f5e-4171-9e13-f725269e3475/galera/0.log" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.405954 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.406212 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cabc55b-125c-43cd-8981-c9c448e7a16e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.406224 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlf9h\" (UniqueName: \"kubernetes.io/projected/5cabc55b-125c-43cd-8981-c9c448e7a16e-kube-api-access-hlf9h\") on node \"crc\" DevicePath \"\"" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.575624 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e9dd965-1448-4801-9871-b6d949a7e1e7/mysql-bootstrap/0.log" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.613048 4982 generic.go:334] "Generic (PLEG): container finished" podID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerID="24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca" exitCode=0 Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.613090 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjh9" event={"ID":"5cabc55b-125c-43cd-8981-c9c448e7a16e","Type":"ContainerDied","Data":"24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca"} Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.613116 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjjh9" event={"ID":"5cabc55b-125c-43cd-8981-c9c448e7a16e","Type":"ContainerDied","Data":"4b653266a749345e3192fd0fa48d1199a239101107e666967039e2a2668f27f8"} Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.613132 4982 scope.go:117] "RemoveContainer" containerID="24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.613259 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjjh9" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.649670 4982 scope.go:117] "RemoveContainer" containerID="5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.657140 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjjh9"] Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.673649 4982 scope.go:117] "RemoveContainer" containerID="ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.674938 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjjh9"] Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.722246 4982 scope.go:117] "RemoveContainer" containerID="24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca" Feb 24 16:39:06 crc kubenswrapper[4982]: E0224 16:39:06.725632 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca\": container with ID starting with 24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca not found: ID does not exist" containerID="24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.725691 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca"} err="failed to get container status \"24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca\": rpc error: code = NotFound desc = could not find container \"24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca\": container with ID starting with 24cc4d6068ad4005cd1a8749bb2587091b24a3974c242924b86a356a918d77ca not found: ID does not exist" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.725713 4982 scope.go:117] "RemoveContainer" containerID="5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3" Feb 24 16:39:06 crc kubenswrapper[4982]: E0224 16:39:06.726058 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3\": container with ID starting with 5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3 not found: ID does not exist" containerID="5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.726085 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3"} err="failed to get container status \"5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3\": rpc error: code = NotFound desc = could not find container \"5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3\": container with ID starting with 5905a966ca0e2cba9d2494d2a6a73207882bc55555dce4643782c33390ac37e3 not found: ID does not exist" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.726102 4982 scope.go:117] "RemoveContainer" containerID="ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c" Feb 24 16:39:06 crc kubenswrapper[4982]: E0224 16:39:06.726325 4982 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c\": container with ID starting with ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c not found: ID does not exist" containerID="ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.726346 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c"} err="failed to get container status \"ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c\": rpc error: code = NotFound desc = could not find container \"ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c\": container with ID starting with ea291dadd6dc129850f9d5306d6fab3c704e0370006ff9bbac4b6302a6e6b76c not found: ID does not exist" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.778144 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e9dd965-1448-4801-9871-b6d949a7e1e7/mysql-bootstrap/0.log" Feb 24 16:39:06 crc kubenswrapper[4982]: I0224 16:39:06.789692 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2e9dd965-1448-4801-9871-b6d949a7e1e7/galera/0.log" Feb 24 16:39:07 crc kubenswrapper[4982]: I0224 16:39:07.018712 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_546fb62a-ffa6-4067-b267-ea1ff18ee76e/openstackclient/0.log" Feb 24 16:39:07 crc kubenswrapper[4982]: I0224 16:39:07.135928 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xf5bb_6846e57a-0d17-4fd1-b470-5923690ad622/openstack-network-exporter/0.log" Feb 24 16:39:07 crc kubenswrapper[4982]: I0224 16:39:07.158259 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cabc55b-125c-43cd-8981-c9c448e7a16e" path="/var/lib/kubelet/pods/5cabc55b-125c-43cd-8981-c9c448e7a16e/volumes" Feb 24 16:39:07 crc kubenswrapper[4982]: I0224 16:39:07.468895 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsdm4_8534c002-8446-4b80-ae93-6d529c59d1df/ovsdb-server-init/0.log" Feb 24 16:39:07 crc kubenswrapper[4982]: I0224 16:39:07.698228 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsdm4_8534c002-8446-4b80-ae93-6d529c59d1df/ovsdb-server-init/0.log" Feb 24 16:39:07 crc kubenswrapper[4982]: I0224 16:39:07.718066 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsdm4_8534c002-8446-4b80-ae93-6d529c59d1df/ovs-vswitchd/0.log" Feb 24 16:39:07 crc kubenswrapper[4982]: I0224 16:39:07.832378 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsdm4_8534c002-8446-4b80-ae93-6d529c59d1df/ovsdb-server/0.log" Feb 24 16:39:07 crc kubenswrapper[4982]: I0224 16:39:07.946672 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xcvbd_2c21939f-82d2-4553-acbd-b570e4d1527c/ovn-controller/0.log" Feb 24 16:39:08 crc kubenswrapper[4982]: I0224 16:39:08.135049 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aa8b9e96-e44a-4a46-87c6-0a473fc97e22/nova-metadata-metadata/0.log" Feb 24 16:39:08 crc kubenswrapper[4982]: I0224 16:39:08.159857 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-srvk7_1646ea82-1708-4dbe-b6af-7cc0c9fe35b1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:08 crc kubenswrapper[4982]: I0224 16:39:08.392765 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_86cf4035-92fa-4d4d-9ce7-ca961da212c2/openstack-network-exporter/0.log" Feb 24 16:39:08 crc kubenswrapper[4982]: I0224 16:39:08.414902 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_86cf4035-92fa-4d4d-9ce7-ca961da212c2/ovn-northd/0.log" Feb 24 16:39:08 crc kubenswrapper[4982]: I0224 16:39:08.595945 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_83806fad-ace2-4023-9023-88d534d78650/ovsdbserver-nb/0.log" Feb 24 16:39:08 crc kubenswrapper[4982]: I0224 16:39:08.620097 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_83806fad-ace2-4023-9023-88d534d78650/openstack-network-exporter/0.log" Feb 24 16:39:08 crc kubenswrapper[4982]: I0224 16:39:08.794731 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_43d8feff-fc48-4fcc-86f8-ce96094eded1/openstack-network-exporter/0.log" Feb 24 16:39:08 crc kubenswrapper[4982]: I0224 16:39:08.800290 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_43d8feff-fc48-4fcc-86f8-ce96094eded1/ovsdbserver-sb/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.183353 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/init-config-reloader/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.187194 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6557c4cbb4-n6w8z_26e32ece-d925-472d-91cd-db19b7bbb9ed/placement-api/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.237536 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6557c4cbb4-n6w8z_26e32ece-d925-472d-91cd-db19b7bbb9ed/placement-log/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.378359 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/init-config-reloader/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.455140 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/config-reloader/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.470458 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/prometheus/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.491011 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4f4cf82-cc11-498c-a168-ca862bfcd361/thanos-sidecar/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.699205 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fc18fffb-2c78-4097-8145-143bf44b11dc/setup-container/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.931189 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fc18fffb-2c78-4097-8145-143bf44b11dc/setup-container/0.log" Feb 24 16:39:09 crc kubenswrapper[4982]: I0224 16:39:09.951803 4982 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fc18fffb-2c78-4097-8145-143bf44b11dc/rabbitmq/0.log" Feb 24 16:39:10 crc kubenswrapper[4982]: I0224 16:39:10.019946 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ec0c9d67-9dca-4bd7-bd58-fa6185479916/setup-container/0.log" Feb 24 16:39:10 crc kubenswrapper[4982]: I0224 16:39:10.279955 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ec0c9d67-9dca-4bd7-bd58-fa6185479916/setup-container/0.log" Feb 24 16:39:10 crc kubenswrapper[4982]: I0224 16:39:10.282510 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ec0c9d67-9dca-4bd7-bd58-fa6185479916/rabbitmq/0.log" Feb 24 16:39:10 crc kubenswrapper[4982]: I0224 16:39:10.337993 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_482245c3-03a8-4890-a48d-b234c5c78c3a/setup-container/0.log" Feb 24 16:39:10 crc kubenswrapper[4982]: I0224 16:39:10.655182 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_482245c3-03a8-4890-a48d-b234c5c78c3a/setup-container/0.log" Feb 24 16:39:10 crc kubenswrapper[4982]: I0224 16:39:10.682028 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_b68b4733-09b0-4fff-b032-f3339306a04d/setup-container/0.log" Feb 24 16:39:10 crc kubenswrapper[4982]: I0224 16:39:10.691547 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_482245c3-03a8-4890-a48d-b234c5c78c3a/rabbitmq/0.log" Feb 24 16:39:11 crc kubenswrapper[4982]: I0224 16:39:11.216985 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_b68b4733-09b0-4fff-b032-f3339306a04d/setup-container/0.log" Feb 24 16:39:11 crc kubenswrapper[4982]: I0224 16:39:11.219804 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_b68b4733-09b0-4fff-b032-f3339306a04d/rabbitmq/0.log" Feb 24 16:39:11 crc kubenswrapper[4982]: I0224 16:39:11.225910 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-dwksb_905ac17f-4256-40f9-b638-454713515dc3/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:11 crc kubenswrapper[4982]: I0224 16:39:11.445482 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7g4bk_f0dc1e67-72c6-406f-ae09-eb8089da0840/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:11 crc kubenswrapper[4982]: I0224 16:39:11.573873 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pqcnh_f3a743f1-a8e5-4df7-a11d-60606242903e/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:11 crc kubenswrapper[4982]: I0224 16:39:11.773754 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-rlds8_d53e94d7-cbbe-429c-8b0e-98bb41fb3dda/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:11 crc kubenswrapper[4982]: I0224 16:39:11.867343 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jb627_71c06248-ae2a-4b15-9774-c3f18e1e61cb/ssh-known-hosts-edpm-deployment/0.log" Feb 24 16:39:12 crc kubenswrapper[4982]: I0224 16:39:12.140462 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-68cc98f8f7-kjwtc_c89910f5-6c21-4f91-a07f-5b17503b3882/proxy-server/0.log" Feb 24 16:39:12 crc kubenswrapper[4982]: I0224 16:39:12.274062 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68cc98f8f7-kjwtc_c89910f5-6c21-4f91-a07f-5b17503b3882/proxy-httpd/0.log" Feb 24 16:39:12 crc kubenswrapper[4982]: I0224 16:39:12.590761 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-cwcwd_e5321445-9e2b-44c7-9975-2bfe929ead53/swift-ring-rebalance/0.log" Feb 24 16:39:12 crc kubenswrapper[4982]: I0224 16:39:12.644172 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/account-auditor/0.log" Feb 24 16:39:12 crc kubenswrapper[4982]: I0224 16:39:12.712860 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/account-reaper/0.log" Feb 24 16:39:12 crc kubenswrapper[4982]: I0224 16:39:12.854603 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/account-replicator/0.log" Feb 24 16:39:12 crc kubenswrapper[4982]: I0224 16:39:12.869512 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/account-server/0.log" Feb 24 16:39:12 crc kubenswrapper[4982]: I0224 16:39:12.891339 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/container-auditor/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.040620 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/container-replicator/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.097616 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/container-server/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.110576 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/container-updater/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.185993 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-auditor/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.339792 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-expirer/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.342082 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-server/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.357145 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-replicator/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.440152 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/object-updater/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.550664 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/swift-recon-cron/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.603271 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b6daa16f-c9d9-465a-8d00-711f5ef84326/rsync/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.770440 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-wpjqh_d376d1ae-0d9e-457a-97c3-dce655164119/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:13 crc kubenswrapper[4982]: I0224 16:39:13.881996 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-rc6p6_eb3d6bb3-611c-4c2a-bb6b-7a3dca94b9fd/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:14 crc kubenswrapper[4982]: I0224 16:39:14.148415 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_92ce340a-a0e5-4ab6-984a-17b5b02bfa0a/test-operator-logs-container/0.log" Feb 24 16:39:14 crc kubenswrapper[4982]: I0224 16:39:14.316896 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-dfvmk_514b9bdd-6644-4170-bd57-2c6b1073b9cb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 24 16:39:14 crc kubenswrapper[4982]: I0224 16:39:14.925676 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2f07bb24-e52f-4fcc-b7f5-92a4a5e9b3d9/tempest-tests-tempest-tests-runner/0.log" Feb 24 16:39:21 crc kubenswrapper[4982]: I0224 16:39:21.127063 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_01f84659-7499-464a-9476-fddfca0dec8a/memcached/0.log" Feb 24 16:39:43 crc kubenswrapper[4982]: I0224 16:39:43.867492 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/util/0.log" Feb 24 16:39:44 crc kubenswrapper[4982]: I0224 16:39:44.022109 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/util/0.log" Feb 24 16:39:44 crc kubenswrapper[4982]: I0224 16:39:44.065917 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/pull/0.log" Feb 24 16:39:44 crc kubenswrapper[4982]: I0224 16:39:44.085855 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/pull/0.log" Feb 24 16:39:44 crc kubenswrapper[4982]: I0224 16:39:44.241052 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/util/0.log" Feb 24 16:39:44 crc kubenswrapper[4982]: I0224 16:39:44.256789 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/pull/0.log" Feb 24 16:39:44 crc kubenswrapper[4982]: I0224 16:39:44.288391 4982 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_0d601b1457cf6343d944ca339ab0782737039759b755cb529cadcbc6b4mfzxv_c2092b9c-4328-4fe0-976e-787c1f38d7bb/extract/0.log" Feb 24 16:39:44 crc kubenswrapper[4982]: I0224 16:39:44.759633 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-rqq7v_b0f19215-2346-4a5a-8b4a-30f19af5db6c/manager/0.log" Feb 24 16:39:45 crc kubenswrapper[4982]: I0224 16:39:45.077128 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-8dxfx_e07053fe-da4d-437b-b884-659d18acc903/manager/0.log" Feb 24 16:39:45 crc kubenswrapper[4982]: I0224 16:39:45.465543 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-4fbcm_84e2fa7b-8efb-4e72-b6a4-42b10ab15984/manager/0.log" Feb 24 16:39:45 crc kubenswrapper[4982]: I0224 16:39:45.575380 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-sl785_7da568d1-374c-4687-8724-ceee9b3857a7/manager/0.log" Feb 24 16:39:46 crc kubenswrapper[4982]: I0224 16:39:46.141724 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-f5stb_5da22390-3c90-4096-8f6b-ac0f8feb4f46/manager/0.log" Feb 24 16:39:46 crc kubenswrapper[4982]: I0224 16:39:46.366872 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-pcpml_7dbd2798-1deb-4014-9bad-8446f47f49e8/manager/0.log" Feb 24 16:39:46 crc kubenswrapper[4982]: I0224 16:39:46.793088 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-bqhfw_c863f339-9142-4edc-b547-9bf0fd0d64bc/manager/0.log" Feb 24 16:39:46 crc kubenswrapper[4982]: I0224 16:39:46.855403 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-xk8mt_cdca3167-b0ff-41e3-8802-02d92f829aff/manager/0.log" Feb 24 16:39:47 crc kubenswrapper[4982]: I0224 16:39:47.145482 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-zmxj9_5602df8b-a253-42b4-8b7d-93a3a793fa2a/manager/0.log" Feb 24 16:39:47 crc kubenswrapper[4982]: I0224 16:39:47.234090 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-fwqhl_12f6eea3-aefd-485f-a582-af40549cefa0/manager/0.log" Feb 24 16:39:47 crc kubenswrapper[4982]: I0224 16:39:47.466424 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-l8xls_5bdfbdfb-c57b-4d3e-9a8f-9b93b18fc6f7/manager/0.log" Feb 24 16:39:47 crc kubenswrapper[4982]: I0224 16:39:47.641227 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-247rw_d6146e6e-9a66-43aa-803c-df072ec31d11/manager/0.log" Feb 24 16:39:47 crc kubenswrapper[4982]: I0224 16:39:47.690924 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-rgfzx_c0967978-25a6-416a-81be-1153d5f5f74b/manager/0.log" Feb 24 16:39:47 crc kubenswrapper[4982]: I0224 16:39:47.861998 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c7st96_c4924244-1803-429b-9c50-8a5c33b1f1b6/manager/0.log" Feb 24 16:39:48 crc kubenswrapper[4982]: I0224 16:39:48.006812 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-f8d897576-j27pk_26238893-6a15-42a9-ae76-3c1c9aa798ee/operator/0.log" Feb 24 16:39:48 crc kubenswrapper[4982]: I0224 16:39:48.217625 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-t7nqk_8c774d84-399c-417b-a926-981419902625/registry-server/0.log" Feb 24 16:39:48 crc kubenswrapper[4982]: I0224 16:39:48.446620 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-pmh6c_d7687408-a30d-42c8-826f-759659e87262/manager/0.log" Feb 24 16:39:48 crc kubenswrapper[4982]: I0224 16:39:48.536957 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-j2q85_2ed106e6-c770-4724-a803-29b4d1b74b6b/manager/0.log" Feb 24 16:39:48 crc kubenswrapper[4982]: I0224 16:39:48.671873 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-w8g8f_f03d42ea-ad31-451a-99a7-c1ecc595f924/operator/0.log" Feb 24 16:39:48 crc kubenswrapper[4982]: I0224 16:39:48.969066 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-xzh2k_8670907a-5fad-4602-8578-5eb1a19d1b44/manager/0.log" Feb 24 16:39:49 crc kubenswrapper[4982]: I0224 16:39:49.214146 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-qhlv4_95e748d2-45c9-4279-b1b4-9a0d18dce523/manager/0.log" Feb 24 16:39:49 crc kubenswrapper[4982]: I0224 16:39:49.474473 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-pjtlt_c45b9d77-86d6-4763-b9aa-44549e04016a/manager/0.log" Feb 24 16:39:49 crc kubenswrapper[4982]: I0224 16:39:49.540142 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6c7fcb66df-f9pl4_d4d8baf6-e8b2-4a05-b73e-ca563c3bb172/manager/0.log" Feb 24 16:39:50 crc kubenswrapper[4982]: I0224 16:39:50.055082 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58cc7d798f-pqcz2_ee761700-7a4b-4f96-8f99-31c55ed51962/manager/0.log" Feb 24 16:39:56 crc kubenswrapper[4982]: I0224 16:39:56.204700 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-xdgp8_b5bb21b9-5878-4960-9bd6-f46b48419f59/manager/0.log" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.156691 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532520-pd6tl"] Feb 24 16:40:00 crc kubenswrapper[4982]: E0224 16:40:00.158688 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerName="registry-server" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.158768 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerName="registry-server" Feb 24 16:40:00 crc kubenswrapper[4982]: E0224 16:40:00.158849 4982 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerName="extract-content" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.158901 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerName="extract-content" Feb 24 16:40:00 crc kubenswrapper[4982]: E0224 16:40:00.158966 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerName="extract-utilities" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.159026 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerName="extract-utilities" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.159298 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cabc55b-125c-43cd-8981-c9c448e7a16e" containerName="registry-server" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.160231 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532520-pd6tl" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.162559 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.163053 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.163314 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.176760 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqrjk\" (UniqueName: \"kubernetes.io/projected/bc3a9b30-4c42-4dc0-9785-5406c44c31df-kube-api-access-rqrjk\") pod \"auto-csr-approver-29532520-pd6tl\" (UID: \"bc3a9b30-4c42-4dc0-9785-5406c44c31df\") " pod="openshift-infra/auto-csr-approver-29532520-pd6tl" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.189034 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532520-pd6tl"] Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.279843 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqrjk\" (UniqueName: \"kubernetes.io/projected/bc3a9b30-4c42-4dc0-9785-5406c44c31df-kube-api-access-rqrjk\") pod \"auto-csr-approver-29532520-pd6tl\" (UID: \"bc3a9b30-4c42-4dc0-9785-5406c44c31df\") " pod="openshift-infra/auto-csr-approver-29532520-pd6tl" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.299605 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqrjk\" (UniqueName: \"kubernetes.io/projected/bc3a9b30-4c42-4dc0-9785-5406c44c31df-kube-api-access-rqrjk\") pod \"auto-csr-approver-29532520-pd6tl\" (UID: \"bc3a9b30-4c42-4dc0-9785-5406c44c31df\") " pod="openshift-infra/auto-csr-approver-29532520-pd6tl" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.489734 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532520-pd6tl" Feb 24 16:40:00 crc kubenswrapper[4982]: I0224 16:40:00.990912 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532520-pd6tl"] Feb 24 16:40:01 crc kubenswrapper[4982]: I0224 16:40:01.184406 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532520-pd6tl" event={"ID":"bc3a9b30-4c42-4dc0-9785-5406c44c31df","Type":"ContainerStarted","Data":"83951058a45242e436a7f44f61c06dfe7f816e7ccfc877f366c309871e9f3f24"} Feb 24 16:40:03 crc kubenswrapper[4982]: I0224 16:40:03.232863 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532520-pd6tl" event={"ID":"bc3a9b30-4c42-4dc0-9785-5406c44c31df","Type":"ContainerStarted","Data":"d5468bcf1438040a0172907a46c308b07cba77d25721f7830f717d7a20496cd7"} Feb 24 16:40:03 crc kubenswrapper[4982]: I0224 16:40:03.265819 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532520-pd6tl" podStartSLOduration=2.023185467 podStartE2EDuration="3.265803062s" podCreationTimestamp="2026-02-24 16:40:00 +0000 UTC" firstStartedPulling="2026-02-24 16:40:01.002763572 +0000 UTC m=+6662.621822065" lastFinishedPulling="2026-02-24 16:40:02.245381137 +0000 UTC m=+6663.864439660" observedRunningTime="2026-02-24 16:40:03.261926767 +0000 UTC m=+6664.880985260" watchObservedRunningTime="2026-02-24 16:40:03.265803062 +0000 UTC m=+6664.884861555" Feb 24 16:40:04 crc kubenswrapper[4982]: I0224 16:40:04.318284 4982 generic.go:334] "Generic (PLEG): container finished" podID="bc3a9b30-4c42-4dc0-9785-5406c44c31df" containerID="d5468bcf1438040a0172907a46c308b07cba77d25721f7830f717d7a20496cd7" exitCode=0 Feb 24 16:40:04 crc kubenswrapper[4982]: I0224 16:40:04.332582 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532520-pd6tl" event={"ID":"bc3a9b30-4c42-4dc0-9785-5406c44c31df","Type":"ContainerDied","Data":"d5468bcf1438040a0172907a46c308b07cba77d25721f7830f717d7a20496cd7"} Feb 24 16:40:05 crc kubenswrapper[4982]: I0224 16:40:05.898738 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532520-pd6tl" Feb 24 16:40:06 crc kubenswrapper[4982]: I0224 16:40:06.053523 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqrjk\" (UniqueName: \"kubernetes.io/projected/bc3a9b30-4c42-4dc0-9785-5406c44c31df-kube-api-access-rqrjk\") pod \"bc3a9b30-4c42-4dc0-9785-5406c44c31df\" (UID: \"bc3a9b30-4c42-4dc0-9785-5406c44c31df\") " Feb 24 16:40:06 crc kubenswrapper[4982]: I0224 16:40:06.066577 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3a9b30-4c42-4dc0-9785-5406c44c31df-kube-api-access-rqrjk" (OuterVolumeSpecName: "kube-api-access-rqrjk") pod "bc3a9b30-4c42-4dc0-9785-5406c44c31df" (UID: "bc3a9b30-4c42-4dc0-9785-5406c44c31df"). InnerVolumeSpecName "kube-api-access-rqrjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:40:06 crc kubenswrapper[4982]: I0224 16:40:06.156137 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqrjk\" (UniqueName: \"kubernetes.io/projected/bc3a9b30-4c42-4dc0-9785-5406c44c31df-kube-api-access-rqrjk\") on node \"crc\" DevicePath \"\"" Feb 24 16:40:06 crc kubenswrapper[4982]: I0224 16:40:06.348479 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532514-mpjxp"] Feb 24 16:40:06 crc kubenswrapper[4982]: I0224 16:40:06.353468 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532520-pd6tl" event={"ID":"bc3a9b30-4c42-4dc0-9785-5406c44c31df","Type":"ContainerDied","Data":"83951058a45242e436a7f44f61c06dfe7f816e7ccfc877f366c309871e9f3f24"} Feb 24 16:40:06 crc kubenswrapper[4982]: I0224 16:40:06.353525 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83951058a45242e436a7f44f61c06dfe7f816e7ccfc877f366c309871e9f3f24" Feb 24 16:40:06 crc kubenswrapper[4982]: I0224 16:40:06.353538 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532520-pd6tl" Feb 24 16:40:06 crc kubenswrapper[4982]: I0224 16:40:06.360911 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532514-mpjxp"] Feb 24 16:40:07 crc kubenswrapper[4982]: I0224 16:40:07.165330 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9889e25d-5027-4f9b-b4b9-eeffa0fac2af" path="/var/lib/kubelet/pods/9889e25d-5027-4f9b-b4b9-eeffa0fac2af/volumes" Feb 24 16:40:11 crc kubenswrapper[4982]: I0224 16:40:11.914205 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5tww7"] Feb 24 16:40:11 crc kubenswrapper[4982]: E0224 16:40:11.916024 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3a9b30-4c42-4dc0-9785-5406c44c31df" containerName="oc" Feb 24 16:40:11 crc kubenswrapper[4982]: I0224 16:40:11.916053 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3a9b30-4c42-4dc0-9785-5406c44c31df" containerName="oc" Feb 24 16:40:11 crc kubenswrapper[4982]: I0224 16:40:11.916596 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3a9b30-4c42-4dc0-9785-5406c44c31df" containerName="oc" Feb 24 16:40:11 crc kubenswrapper[4982]: I0224 16:40:11.919874 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:11 crc kubenswrapper[4982]: I0224 16:40:11.929603 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5tww7"] Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.008788 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-catalog-content\") pod \"community-operators-5tww7\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.009130 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hmx\" (UniqueName: \"kubernetes.io/projected/573379b8-226a-4a5d-b9ca-86acc10da15b-kube-api-access-j6hmx\") pod \"community-operators-5tww7\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.009369 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-utilities\") pod \"community-operators-5tww7\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.111812 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-catalog-content\") pod \"community-operators-5tww7\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.111951 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6hmx\" (UniqueName: \"kubernetes.io/projected/573379b8-226a-4a5d-b9ca-86acc10da15b-kube-api-access-j6hmx\") pod \"community-operators-5tww7\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.112000 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-utilities\") pod \"community-operators-5tww7\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.112599 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-utilities\") pod \"community-operators-5tww7\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.112677 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-catalog-content\") pod \"community-operators-5tww7\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.141208 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j6hmx\" (UniqueName: \"kubernetes.io/projected/573379b8-226a-4a5d-b9ca-86acc10da15b-kube-api-access-j6hmx\") pod \"community-operators-5tww7\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.239443 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:12 crc kubenswrapper[4982]: I0224 16:40:12.736861 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5tww7"] Feb 24 16:40:13 crc kubenswrapper[4982]: I0224 16:40:13.446437 4982 generic.go:334] "Generic (PLEG): container finished" podID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerID="d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e" exitCode=0 Feb 24 16:40:13 crc kubenswrapper[4982]: I0224 16:40:13.446560 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tww7" event={"ID":"573379b8-226a-4a5d-b9ca-86acc10da15b","Type":"ContainerDied","Data":"d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e"} Feb 24 16:40:13 crc kubenswrapper[4982]: I0224 16:40:13.447749 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tww7" event={"ID":"573379b8-226a-4a5d-b9ca-86acc10da15b","Type":"ContainerStarted","Data":"19e0a1def366d07380cead8afb71ddf814fa4555f56f7c1ef527c3ffaa155db5"} Feb 24 16:40:13 crc kubenswrapper[4982]: I0224 16:40:13.562092 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-b6dkb_8dd5c785-b167-4b52-8c16-4eea0fcb5685/control-plane-machine-set-operator/0.log" Feb 24 16:40:13 crc kubenswrapper[4982]: I0224 16:40:13.726485 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5sdrm_1b0d00bf-0cb6-4fa2-9561-edafa4a10082/machine-api-operator/0.log" Feb 24 16:40:13 crc kubenswrapper[4982]: I0224 16:40:13.747333 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5sdrm_1b0d00bf-0cb6-4fa2-9561-edafa4a10082/kube-rbac-proxy/0.log" Feb 24 16:40:14 crc kubenswrapper[4982]: I0224 16:40:14.460705 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tww7" event={"ID":"573379b8-226a-4a5d-b9ca-86acc10da15b","Type":"ContainerStarted","Data":"414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334"} Feb 24 16:40:17 crc kubenswrapper[4982]: I0224 16:40:17.494449 4982 generic.go:334] "Generic (PLEG): container finished" podID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerID="414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334" exitCode=0 Feb 24 16:40:17 crc kubenswrapper[4982]: I0224 16:40:17.494519 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tww7" event={"ID":"573379b8-226a-4a5d-b9ca-86acc10da15b","Type":"ContainerDied","Data":"414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334"} Feb 24 16:40:18 crc kubenswrapper[4982]: I0224 16:40:18.518936 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tww7" 
event={"ID":"573379b8-226a-4a5d-b9ca-86acc10da15b","Type":"ContainerStarted","Data":"65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8"} Feb 24 16:40:18 crc kubenswrapper[4982]: I0224 16:40:18.559673 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5tww7" podStartSLOduration=3.126940334 podStartE2EDuration="7.559652533s" podCreationTimestamp="2026-02-24 16:40:11 +0000 UTC" firstStartedPulling="2026-02-24 16:40:13.448678711 +0000 UTC m=+6675.067737204" lastFinishedPulling="2026-02-24 16:40:17.88139091 +0000 UTC m=+6679.500449403" observedRunningTime="2026-02-24 16:40:18.543859003 +0000 UTC m=+6680.162917526" watchObservedRunningTime="2026-02-24 16:40:18.559652533 +0000 UTC m=+6680.178711046" Feb 24 16:40:22 crc kubenswrapper[4982]: I0224 16:40:22.240531 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:22 crc kubenswrapper[4982]: I0224 16:40:22.241218 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:23 crc kubenswrapper[4982]: I0224 16:40:23.312636 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5tww7" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerName="registry-server" probeResult="failure" output=< Feb 24 16:40:23 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:40:23 crc kubenswrapper[4982]: > Feb 24 16:40:29 crc kubenswrapper[4982]: I0224 16:40:29.544374 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-ll6lf_d543e947-1fd6-4253-84c8-5dd81a835ba4/cert-manager-controller/0.log" Feb 24 16:40:29 crc kubenswrapper[4982]: I0224 16:40:29.697957 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mpsc5_906e9aec-8f03-4230-8b2b-01459a8c2fcc/cert-manager-cainjector/0.log" Feb 24 16:40:29 crc kubenswrapper[4982]: I0224 16:40:29.786949 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-4zfz6_8422a1e6-92b5-4b34-a360-004609a25ac0/cert-manager-webhook/0.log" Feb 24 16:40:32 crc kubenswrapper[4982]: I0224 16:40:32.310297 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:32 crc kubenswrapper[4982]: I0224 16:40:32.389238 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:32 crc kubenswrapper[4982]: I0224 16:40:32.551551 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5tww7"] Feb 24 16:40:33 crc kubenswrapper[4982]: I0224 16:40:33.694707 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5tww7" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerName="registry-server" containerID="cri-o://65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8" gracePeriod=2 Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.258804 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.344004 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-utilities\") pod \"573379b8-226a-4a5d-b9ca-86acc10da15b\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.344446 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6hmx\" (UniqueName: \"kubernetes.io/projected/573379b8-226a-4a5d-b9ca-86acc10da15b-kube-api-access-j6hmx\") pod \"573379b8-226a-4a5d-b9ca-86acc10da15b\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.344568 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-catalog-content\") pod \"573379b8-226a-4a5d-b9ca-86acc10da15b\" (UID: \"573379b8-226a-4a5d-b9ca-86acc10da15b\") " Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.345726 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-utilities" (OuterVolumeSpecName: "utilities") pod "573379b8-226a-4a5d-b9ca-86acc10da15b" (UID: "573379b8-226a-4a5d-b9ca-86acc10da15b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.354636 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573379b8-226a-4a5d-b9ca-86acc10da15b-kube-api-access-j6hmx" (OuterVolumeSpecName: "kube-api-access-j6hmx") pod "573379b8-226a-4a5d-b9ca-86acc10da15b" (UID: "573379b8-226a-4a5d-b9ca-86acc10da15b"). InnerVolumeSpecName "kube-api-access-j6hmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.404630 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "573379b8-226a-4a5d-b9ca-86acc10da15b" (UID: "573379b8-226a-4a5d-b9ca-86acc10da15b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.448359 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6hmx\" (UniqueName: \"kubernetes.io/projected/573379b8-226a-4a5d-b9ca-86acc10da15b-kube-api-access-j6hmx\") on node \"crc\" DevicePath \"\"" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.448391 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.448401 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573379b8-226a-4a5d-b9ca-86acc10da15b-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.707956 4982 generic.go:334] "Generic (PLEG): container finished" podID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerID="65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8" exitCode=0 Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.707999 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tww7" event={"ID":"573379b8-226a-4a5d-b9ca-86acc10da15b","Type":"ContainerDied","Data":"65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8"} Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.708043 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tww7" event={"ID":"573379b8-226a-4a5d-b9ca-86acc10da15b","Type":"ContainerDied","Data":"19e0a1def366d07380cead8afb71ddf814fa4555f56f7c1ef527c3ffaa155db5"} Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.708073 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5tww7" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.708080 4982 scope.go:117] "RemoveContainer" containerID="65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.762541 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5tww7"] Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.771894 4982 scope.go:117] "RemoveContainer" containerID="414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.775281 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5tww7"] Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.804010 4982 scope.go:117] "RemoveContainer" containerID="d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.863702 4982 scope.go:117] "RemoveContainer" containerID="65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8" Feb 24 16:40:34 crc kubenswrapper[4982]: E0224 16:40:34.864717 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8\": container with ID starting with 65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8 not found: ID does not exist" containerID="65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.864785 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8"} err="failed to get container status \"65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8\": rpc error: code = NotFound desc = could not find container \"65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8\": container with ID starting with 65adf28d7e3ec9810822925b2065530bcf10519c71eb3e60a57992be450b53c8 not found: ID does not exist" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.864831 4982 scope.go:117] "RemoveContainer" containerID="414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334" Feb 24 16:40:34 crc kubenswrapper[4982]: E0224 16:40:34.865465 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334\": container with ID starting with 414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334 not found: ID does not exist" containerID="414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.865545 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334"} err="failed to get container status \"414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334\": rpc error: code = NotFound desc = could not find container \"414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334\": container with ID starting with 414e9a8158413b6243c08cf9fee9e8f1c02f784932ca86edb3f67dfbce636334 not found: ID does not exist" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.865576 4982 scope.go:117] "RemoveContainer" 
containerID="d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e" Feb 24 16:40:34 crc kubenswrapper[4982]: E0224 16:40:34.866136 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e\": container with ID starting with d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e not found: ID does not exist" containerID="d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e" Feb 24 16:40:34 crc kubenswrapper[4982]: I0224 16:40:34.866177 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e"} err="failed to get container status \"d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e\": rpc error: code = NotFound desc = could not find container \"d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e\": container with ID starting with d3841ca46919a4744657955fe5eec047689a2f6d8ea9fcb5194f6b62f70f5e1e not found: ID does not exist" Feb 24 16:40:35 crc kubenswrapper[4982]: I0224 16:40:35.158001 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" path="/var/lib/kubelet/pods/573379b8-226a-4a5d-b9ca-86acc10da15b/volumes" Feb 24 16:40:44 crc kubenswrapper[4982]: I0224 16:40:44.001492 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-kqblm_f0fec11a-acd6-4eb3-9019-2ecdd41eccf3/nmstate-console-plugin/0.log" Feb 24 16:40:44 crc kubenswrapper[4982]: I0224 16:40:44.200767 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8k954_42eaa5bc-682b-40c1-ace7-12acd0a45032/nmstate-handler/0.log" Feb 24 16:40:44 crc kubenswrapper[4982]: I0224 16:40:44.250035 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7rt8q_1e0416f6-ebc9-4a01-a69a-904aab8b4cbb/kube-rbac-proxy/0.log" Feb 24 16:40:44 crc kubenswrapper[4982]: I0224 16:40:44.327487 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7rt8q_1e0416f6-ebc9-4a01-a69a-904aab8b4cbb/nmstate-metrics/0.log" Feb 24 16:40:44 crc kubenswrapper[4982]: I0224 16:40:44.424007 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-lj9wc_9d9e0bf3-ed22-416c-b672-8df43d3014c0/nmstate-operator/0.log" Feb 24 16:40:44 crc kubenswrapper[4982]: I0224 16:40:44.535158 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-vfvxq_ef265381-af0c-4642-93cb-075344e3650c/nmstate-webhook/0.log" Feb 24 16:40:57 crc kubenswrapper[4982]: I0224 16:40:57.458549 4982 scope.go:117] "RemoveContainer" containerID="f67e44eb86d825d92abf1432022e2c604100684c6442694c43750b4574df14c8" Feb 24 16:40:58 crc kubenswrapper[4982]: I0224 16:40:58.724314 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4467cd99-kv4ps_9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c/kube-rbac-proxy/0.log" Feb 24 16:40:58 crc kubenswrapper[4982]: I0224 16:40:58.761956 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4467cd99-kv4ps_9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c/manager/0.log" Feb 24 16:41:08 crc 
Feb 24 16:41:08 crc kubenswrapper[4982]: I0224 16:41:08.739608 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 16:41:12 crc kubenswrapper[4982]: I0224 16:41:12.365718 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-wjv8g_5d19a6f9-587e-42fc-8dd5-1a363bac4c09/prometheus-operator/0.log"
Feb 24 16:41:12 crc kubenswrapper[4982]: I0224 16:41:12.560080 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb_14ead058-d4ed-4e55-9632-a5e2f571b469/prometheus-operator-admission-webhook/0.log"
Feb 24 16:41:12 crc kubenswrapper[4982]: I0224 16:41:12.636726 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc_ca2d2e72-fd73-4ad0-8d81-718235c7f891/prometheus-operator-admission-webhook/0.log"
Feb 24 16:41:12 crc kubenswrapper[4982]: I0224 16:41:12.750210 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9bmsh_8186e569-67ca-4273-9de2-130ffd7dcf09/operator/0.log"
Feb 24 16:41:12 crc kubenswrapper[4982]: I0224 16:41:12.813405 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m6fxj_10e410a1-e886-451e-9cfc-40f6812a4d0d/observability-ui-dashboards/0.log"
Feb 24 16:41:12 crc kubenswrapper[4982]: I0224 16:41:12.938256 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4hqxq_ef281ee3-4742-4dd3-947f-32c7f039f5ec/perses-operator/0.log"
Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.052273 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-d86g4_dd40714b-1f28-413d-bfec-b2c20b09e12f/cluster-logging-operator/0.log"
Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.192643 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-vt6tk_fd36302e-5b75-4a73-ae39-e4a8e58f2682/collector/0.log"
Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.277976 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_8eb67e33-3f41-4046-930e-babb8b65f3cc/loki-compactor/0.log"
Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.380240 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-p6kh9_8fa63326-7a48-4c93-bad4-6ddb3d1d0731/loki-distributor/0.log"
Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.450092 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76767f4456-gmbbt_8a3d5174-0a86-43bc-bc05-a974c01aef1b/opa/0.log"
Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.458052 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76767f4456-gmbbt_8a3d5174-0a86-43bc-bc05-a974c01aef1b/gateway/0.log"
"Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76767f4456-gmbbt_8a3d5174-0a86-43bc-bc05-a974c01aef1b/gateway/0.log" Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.589561 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76767f4456-hfv4z_f653b7f2-9a99-4426-b855-beb8dde56230/gateway/0.log" Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.649033 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76767f4456-hfv4z_f653b7f2-9a99-4426-b855-beb8dde56230/opa/0.log" Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.729705 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_3053cc27-e7fb-460e-84f8-92085a6aa8e5/loki-index-gateway/0.log" Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.887723 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_f4e9d226-da8e-46e8-b378-5aafba527e2c/loki-ingester/0.log" Feb 24 16:41:29 crc kubenswrapper[4982]: I0224 16:41:29.956797 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-cldqm_3c5894c9-7c05-4c3f-9ae1-a75f77f7f37a/loki-querier/0.log" Feb 24 16:41:30 crc kubenswrapper[4982]: I0224 16:41:30.058479 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-jr57w_6e368d67-d0ff-4b7e-b9f4-fe631e4db4e4/loki-query-frontend/0.log" Feb 24 16:41:38 crc kubenswrapper[4982]: I0224 16:41:38.738679 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:41:38 crc kubenswrapper[4982]: I0224 16:41:38.739098 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:41:45 crc kubenswrapper[4982]: I0224 16:41:45.564051 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-flpt4_59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad/kube-rbac-proxy/0.log" Feb 24 16:41:45 crc kubenswrapper[4982]: I0224 16:41:45.651739 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-flpt4_59c1e10b-2985-4ab4-b3d5-6a1ef1ba85ad/controller/0.log" Feb 24 16:41:45 crc kubenswrapper[4982]: I0224 16:41:45.778520 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-frr-files/0.log" Feb 24 16:41:45 crc kubenswrapper[4982]: I0224 16:41:45.929258 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-metrics/0.log" Feb 24 16:41:45 crc kubenswrapper[4982]: I0224 16:41:45.937211 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-reloader/0.log" Feb 24 16:41:45 crc kubenswrapper[4982]: I0224 16:41:45.952200 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-reloader/0.log" Feb 24 16:41:45 crc kubenswrapper[4982]: I0224 16:41:45.962975 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-frr-files/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.144272 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-frr-files/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.155653 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-reloader/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.183866 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-metrics/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.211264 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-metrics/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.343463 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-reloader/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.347623 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-frr-files/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.357767 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/cp-metrics/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.421491 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/controller/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.503271 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/frr-metrics/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.550832 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/kube-rbac-proxy/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.652891 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/kube-rbac-proxy-frr/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.728042 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/reloader/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.888438 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-tszx8_dd5e975a-6b06-4ba5-a549-63843e3d9f41/frr-k8s-webhook-server/0.log" Feb 24 16:41:46 crc kubenswrapper[4982]: I0224 16:41:46.998144 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-686d7d6557-xttc8_bce5dbab-56c7-4132-aa32-b13ea1d81ada/manager/0.log" Feb 24 16:41:47 crc kubenswrapper[4982]: I0224 16:41:47.184259 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7474599d7f-769v9_eb7df4a4-76b1-411c-97c1-2e7ad64dfdeb/webhook-server/0.log" Feb 24 16:41:47 crc kubenswrapper[4982]: I0224 16:41:47.334032 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-z6xx7_c5421b10-e070-4cdd-a7b1-060d75642b50/kube-rbac-proxy/0.log" Feb 24 16:41:47 crc kubenswrapper[4982]: I0224 16:41:47.978634 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-z6xx7_c5421b10-e070-4cdd-a7b1-060d75642b50/speaker/0.log" Feb 24 16:41:48 crc kubenswrapper[4982]: I0224 16:41:48.955165 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5rzcg_1d717783-cc63-4772-82fd-b8865e471134/frr/0.log" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.167080 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532522-2zqnv"] Feb 24 16:42:00 crc kubenswrapper[4982]: E0224 16:42:00.168382 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerName="extract-content" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.168486 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerName="extract-content" Feb 24 16:42:00 crc kubenswrapper[4982]: E0224 16:42:00.168528 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerName="extract-utilities" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.168537 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerName="extract-utilities" Feb 24 16:42:00 crc kubenswrapper[4982]: E0224 16:42:00.168589 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerName="registry-server" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.168599 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerName="registry-server" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.168876 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="573379b8-226a-4a5d-b9ca-86acc10da15b" containerName="registry-server" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.169832 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532522-2zqnv" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.172637 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.172753 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.173141 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.182072 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532522-2zqnv"] Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.263741 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzz27\" (UniqueName: \"kubernetes.io/projected/1a902e48-0351-46eb-bc3a-68b0b9e572be-kube-api-access-qzz27\") pod \"auto-csr-approver-29532522-2zqnv\" (UID: \"1a902e48-0351-46eb-bc3a-68b0b9e572be\") " pod="openshift-infra/auto-csr-approver-29532522-2zqnv" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.365689 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzz27\" (UniqueName: \"kubernetes.io/projected/1a902e48-0351-46eb-bc3a-68b0b9e572be-kube-api-access-qzz27\") pod \"auto-csr-approver-29532522-2zqnv\" (UID: \"1a902e48-0351-46eb-bc3a-68b0b9e572be\") " pod="openshift-infra/auto-csr-approver-29532522-2zqnv" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.393418 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzz27\" (UniqueName: \"kubernetes.io/projected/1a902e48-0351-46eb-bc3a-68b0b9e572be-kube-api-access-qzz27\") pod \"auto-csr-approver-29532522-2zqnv\" (UID: \"1a902e48-0351-46eb-bc3a-68b0b9e572be\") " pod="openshift-infra/auto-csr-approver-29532522-2zqnv" Feb 24 16:42:00 crc kubenswrapper[4982]: I0224 16:42:00.494548 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532522-2zqnv" Feb 24 16:42:01 crc kubenswrapper[4982]: I0224 16:42:01.010215 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532522-2zqnv"] Feb 24 16:42:01 crc kubenswrapper[4982]: I0224 16:42:01.013552 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 16:42:01 crc kubenswrapper[4982]: I0224 16:42:01.816110 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532522-2zqnv" event={"ID":"1a902e48-0351-46eb-bc3a-68b0b9e572be","Type":"ContainerStarted","Data":"dee738392493f12aaa8802acc569385fbe0711052b80cf949f557f04c7eec0ff"} Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.074307 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/util/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.263315 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/pull/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.285183 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/util/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.300964 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/pull/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.525280 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/extract/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.534236 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/util/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.546026 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19pks8q_59aa3bc9-c53a-4844-8572-79a2dc711e95/pull/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.755677 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/util/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.924814 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/pull/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.950307 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/util/0.log" Feb 24 16:42:03 crc kubenswrapper[4982]: I0224 16:42:03.965128 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/pull/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.168441 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/util/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.178107 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/pull/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.210370 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085m6qb_ce4f631a-0a9c-4f06-9b04-1b4240f0900d/extract/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.364907 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/util/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.496391 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/util/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.504475 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/pull/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.538774 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/pull/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.689479 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/util/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.734570 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/pull/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.734997 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213msxr8_f003884e-c2a7-466f-8d03-b0f7bbd2254d/extract/0.log" Feb 24 16:42:04 crc kubenswrapper[4982]: I0224 16:42:04.879335 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-utilities/0.log" Feb 24 16:42:05 crc kubenswrapper[4982]: I0224 16:42:05.058615 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-utilities/0.log" Feb 24 16:42:05 crc kubenswrapper[4982]: I0224 16:42:05.075981 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-content/0.log" Feb 24 16:42:05 crc kubenswrapper[4982]: I0224 16:42:05.083475 
4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-content/0.log" Feb 24 16:42:05 crc kubenswrapper[4982]: I0224 16:42:05.266172 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-utilities/0.log" Feb 24 16:42:05 crc kubenswrapper[4982]: I0224 16:42:05.273045 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/extract-content/0.log" Feb 24 16:42:05 crc kubenswrapper[4982]: I0224 16:42:05.507373 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-utilities/0.log" Feb 24 16:42:05 crc kubenswrapper[4982]: I0224 16:42:05.724630 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-content/0.log" Feb 24 16:42:05 crc kubenswrapper[4982]: I0224 16:42:05.725783 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-utilities/0.log" Feb 24 16:42:05 crc kubenswrapper[4982]: I0224 16:42:05.814412 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-content/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.067051 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-content/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.073395 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/extract-utilities/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.162243 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fzc7v_6d08aa9d-495f-4693-86d2-240687656356/registry-server/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.314360 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/util/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.499115 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/util/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.530802 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/pull/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.644482 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/pull/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.820505 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/pull/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.829221 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/util/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.864959 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532522-2zqnv" event={"ID":"1a902e48-0351-46eb-bc3a-68b0b9e572be","Type":"ContainerStarted","Data":"e006ba631f2d776c28fd5115aa4614801c8636f4bb3db475de93d91320739024"} Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.886468 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989lzjwp_11c92c38-3fb2-443e-bcfe-887679226802/extract/0.log" Feb 24 16:42:06 crc kubenswrapper[4982]: I0224 16:42:06.895645 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532522-2zqnv" podStartSLOduration=1.972778081 podStartE2EDuration="6.895495408s" podCreationTimestamp="2026-02-24 16:42:00 +0000 UTC" firstStartedPulling="2026-02-24 16:42:01.009201462 +0000 UTC m=+6782.628259965" lastFinishedPulling="2026-02-24 16:42:05.931918799 +0000 UTC m=+6787.550977292" observedRunningTime="2026-02-24 16:42:06.886921505 +0000 UTC m=+6788.505980008" watchObservedRunningTime="2026-02-24 16:42:06.895495408 +0000 UTC m=+6788.514553901" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.042490 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f82l9_6c58b3d9-b12e-4b69-9dcd-0c5b6b96f00f/registry-server/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.069103 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/util/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.211666 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/util/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.221811 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/pull/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.242910 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/pull/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.406641 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/pull/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.419151 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-w8m2v_cfc42f28-cff7-46a9-a4cb-1421f8e7e61e/marketplace-operator/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.441184 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/util/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.457053 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca5ddnm_44ef5dc7-0127-4741-bbbb-afa5033ede1a/extract/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.612990 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-utilities/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.769325 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-content/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.781333 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-utilities/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.789713 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-content/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.876088 4982 generic.go:334] "Generic (PLEG): container finished" podID="1a902e48-0351-46eb-bc3a-68b0b9e572be" containerID="e006ba631f2d776c28fd5115aa4614801c8636f4bb3db475de93d91320739024" exitCode=0 Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.876133 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532522-2zqnv" event={"ID":"1a902e48-0351-46eb-bc3a-68b0b9e572be","Type":"ContainerDied","Data":"e006ba631f2d776c28fd5115aa4614801c8636f4bb3db475de93d91320739024"} Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.951441 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-utilities/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.965258 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/extract-content/0.log" Feb 24 16:42:07 crc kubenswrapper[4982]: I0224 16:42:07.991323 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-utilities/0.log" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.197967 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-content/0.log" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.201869 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-h5bts_94c42884-373a-42f1-91f4-1949f4a8fbe8/registry-server/0.log" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.226603 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-content/0.log" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.254399 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-utilities/0.log" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.391029 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-utilities/0.log" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.430976 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/extract-content/0.log" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.738293 4982 patch_prober.go:28] interesting pod/machine-config-daemon-b79sf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.738523 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.738637 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.739942 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50"} pod="openshift-machine-config-operator/machine-config-daemon-b79sf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.740097 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerName="machine-config-daemon" containerID="cri-o://457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" gracePeriod=600 Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.751958 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rnfz2_2ac0cb26-a32d-4377-afe7-33e056fd5f4d/registry-server/0.log" Feb 24 16:42:08 crc kubenswrapper[4982]: E0224 16:42:08.861126 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.888337 4982 generic.go:334] "Generic (PLEG): container finished" podID="bf688571-4e47-42da-80b4-0d54580ce6c8" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" exitCode=0 Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.888414 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" 
event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerDied","Data":"457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50"} Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.888464 4982 scope.go:117] "RemoveContainer" containerID="8ace8224ab7a5a8617509839dcd2d4d3ee3d53ce07edc0028d24bce3f1f97c1b" Feb 24 16:42:08 crc kubenswrapper[4982]: I0224 16:42:08.889522 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:42:08 crc kubenswrapper[4982]: E0224 16:42:08.889878 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:42:09 crc kubenswrapper[4982]: I0224 16:42:09.270616 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532522-2zqnv" Feb 24 16:42:09 crc kubenswrapper[4982]: I0224 16:42:09.370194 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzz27\" (UniqueName: \"kubernetes.io/projected/1a902e48-0351-46eb-bc3a-68b0b9e572be-kube-api-access-qzz27\") pod \"1a902e48-0351-46eb-bc3a-68b0b9e572be\" (UID: \"1a902e48-0351-46eb-bc3a-68b0b9e572be\") " Feb 24 16:42:09 crc kubenswrapper[4982]: I0224 16:42:09.377362 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a902e48-0351-46eb-bc3a-68b0b9e572be-kube-api-access-qzz27" (OuterVolumeSpecName: "kube-api-access-qzz27") pod "1a902e48-0351-46eb-bc3a-68b0b9e572be" (UID: "1a902e48-0351-46eb-bc3a-68b0b9e572be"). InnerVolumeSpecName "kube-api-access-qzz27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:42:09 crc kubenswrapper[4982]: I0224 16:42:09.473631 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzz27\" (UniqueName: \"kubernetes.io/projected/1a902e48-0351-46eb-bc3a-68b0b9e572be-kube-api-access-qzz27\") on node \"crc\" DevicePath \"\"" Feb 24 16:42:09 crc kubenswrapper[4982]: I0224 16:42:09.902420 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532522-2zqnv" event={"ID":"1a902e48-0351-46eb-bc3a-68b0b9e572be","Type":"ContainerDied","Data":"dee738392493f12aaa8802acc569385fbe0711052b80cf949f557f04c7eec0ff"} Feb 24 16:42:09 crc kubenswrapper[4982]: I0224 16:42:09.902450 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532522-2zqnv" Feb 24 16:42:09 crc kubenswrapper[4982]: I0224 16:42:09.902469 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dee738392493f12aaa8802acc569385fbe0711052b80cf949f557f04c7eec0ff" Feb 24 16:42:09 crc kubenswrapper[4982]: I0224 16:42:09.955625 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532516-j999s"] Feb 24 16:42:09 crc kubenswrapper[4982]: I0224 16:42:09.966877 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532516-j999s"] Feb 24 16:42:11 crc kubenswrapper[4982]: I0224 16:42:11.159667 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03fa432e-9a3c-4f3b-9314-71aba5302e06" path="/var/lib/kubelet/pods/03fa432e-9a3c-4f3b-9314-71aba5302e06/volumes" Feb 24 16:42:22 crc kubenswrapper[4982]: I0224 16:42:22.345821 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bd6968d8f-dsjhb_14ead058-d4ed-4e55-9632-a5e2f571b469/prometheus-operator-admission-webhook/0.log" Feb 24 16:42:22 crc kubenswrapper[4982]: I0224 16:42:22.347823 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5bd6968d8f-mrctc_ca2d2e72-fd73-4ad0-8d81-718235c7f891/prometheus-operator-admission-webhook/0.log" Feb 24 16:42:22 crc kubenswrapper[4982]: I0224 16:42:22.352832 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-wjv8g_5d19a6f9-587e-42fc-8dd5-1a363bac4c09/prometheus-operator/0.log" Feb 24 16:42:22 crc kubenswrapper[4982]: I0224 16:42:22.555388 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m6fxj_10e410a1-e886-451e-9cfc-40f6812a4d0d/observability-ui-dashboards/0.log" Feb 24 16:42:22 crc kubenswrapper[4982]: I0224 16:42:22.563025 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9bmsh_8186e569-67ca-4273-9de2-130ffd7dcf09/operator/0.log" Feb 24 16:42:22 crc kubenswrapper[4982]: I0224 16:42:22.585243 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-4hqxq_ef281ee3-4742-4dd3-947f-32c7f039f5ec/perses-operator/0.log" Feb 24 16:42:23 crc kubenswrapper[4982]: I0224 16:42:23.145776 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:42:23 crc kubenswrapper[4982]: E0224 16:42:23.146925 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:42:37 crc kubenswrapper[4982]: I0224 16:42:37.146007 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:42:37 crc kubenswrapper[4982]: E0224 16:42:37.149088 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:42:37 crc kubenswrapper[4982]: I0224 16:42:37.205895 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4467cd99-kv4ps_9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c/kube-rbac-proxy/0.log" Feb 24 16:42:37 crc kubenswrapper[4982]: I0224 16:42:37.227635 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6d4467cd99-kv4ps_9be12fef-bcde-4b1e-bbc5-2bcb1f839d7c/manager/0.log" Feb 24 16:42:51 crc kubenswrapper[4982]: I0224 16:42:51.145842 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:42:51 crc kubenswrapper[4982]: E0224 16:42:51.146790 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:42:57 crc kubenswrapper[4982]: I0224 16:42:57.642563 4982 scope.go:117] "RemoveContainer" containerID="c9bceda3adc7db40727ff2026d2de8ea84b5ec97bcef632446379c31faccb358" Feb 24 16:43:04 crc kubenswrapper[4982]: I0224 16:43:04.146552 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:43:04 crc kubenswrapper[4982]: E0224 16:43:04.147204 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:43:17 crc kubenswrapper[4982]: I0224 16:43:17.145814 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:43:17 crc kubenswrapper[4982]: E0224 16:43:17.146764 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:43:31 crc kubenswrapper[4982]: I0224 16:43:31.146984 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:43:31 crc kubenswrapper[4982]: E0224 16:43:31.147729 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:43:45 crc kubenswrapper[4982]: I0224 16:43:45.148874 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:43:45 crc kubenswrapper[4982]: E0224 16:43:45.150768 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:43:57 crc kubenswrapper[4982]: I0224 16:43:57.762662 4982 scope.go:117] "RemoveContainer" containerID="e60555db9b5a077ad6bb8b0a2a1cc29adf0e095c883bfa285f9a7abfe6b86ba8" Feb 24 16:43:59 crc kubenswrapper[4982]: I0224 16:43:59.913193 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:43:59 crc kubenswrapper[4982]: E0224 16:43:59.914827 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.159741 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532524-dsvng"] Feb 24 16:44:00 crc kubenswrapper[4982]: E0224 16:44:00.160151 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a902e48-0351-46eb-bc3a-68b0b9e572be" containerName="oc" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.160163 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a902e48-0351-46eb-bc3a-68b0b9e572be" containerName="oc" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.160421 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a902e48-0351-46eb-bc3a-68b0b9e572be" containerName="oc" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.161167 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532524-dsvng" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.164397 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.164645 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.165046 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.194944 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532524-dsvng"] Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.296719 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbdtz\" (UniqueName: \"kubernetes.io/projected/07843f6d-85cf-4c83-bf38-2ad660f9ad93-kube-api-access-rbdtz\") pod \"auto-csr-approver-29532524-dsvng\" (UID: \"07843f6d-85cf-4c83-bf38-2ad660f9ad93\") " pod="openshift-infra/auto-csr-approver-29532524-dsvng" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.399647 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbdtz\" (UniqueName: \"kubernetes.io/projected/07843f6d-85cf-4c83-bf38-2ad660f9ad93-kube-api-access-rbdtz\") pod \"auto-csr-approver-29532524-dsvng\" (UID: \"07843f6d-85cf-4c83-bf38-2ad660f9ad93\") " pod="openshift-infra/auto-csr-approver-29532524-dsvng" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.431880 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbdtz\" (UniqueName: \"kubernetes.io/projected/07843f6d-85cf-4c83-bf38-2ad660f9ad93-kube-api-access-rbdtz\") pod \"auto-csr-approver-29532524-dsvng\" (UID: \"07843f6d-85cf-4c83-bf38-2ad660f9ad93\") " pod="openshift-infra/auto-csr-approver-29532524-dsvng" Feb 24 16:44:00 crc kubenswrapper[4982]: I0224 16:44:00.479861 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532524-dsvng" Feb 24 16:44:01 crc kubenswrapper[4982]: W0224 16:44:01.161111 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07843f6d_85cf_4c83_bf38_2ad660f9ad93.slice/crio-3f69dd8bb490bea26e74e00b0ea9072ebd91fda1e2d9f486e10d72d257cfd277 WatchSource:0}: Error finding container 3f69dd8bb490bea26e74e00b0ea9072ebd91fda1e2d9f486e10d72d257cfd277: Status 404 returned error can't find the container with id 3f69dd8bb490bea26e74e00b0ea9072ebd91fda1e2d9f486e10d72d257cfd277 Feb 24 16:44:01 crc kubenswrapper[4982]: I0224 16:44:01.166705 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532524-dsvng"] Feb 24 16:44:01 crc kubenswrapper[4982]: I0224 16:44:01.954068 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532524-dsvng" event={"ID":"07843f6d-85cf-4c83-bf38-2ad660f9ad93","Type":"ContainerStarted","Data":"3f69dd8bb490bea26e74e00b0ea9072ebd91fda1e2d9f486e10d72d257cfd277"} Feb 24 16:44:02 crc kubenswrapper[4982]: I0224 16:44:02.963460 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532524-dsvng" event={"ID":"07843f6d-85cf-4c83-bf38-2ad660f9ad93","Type":"ContainerStarted","Data":"8ea014902956ca7c36361dffcca590dcb778b7667c2e1b89145910638f2d6f01"} Feb 24 16:44:02 crc kubenswrapper[4982]: I0224 16:44:02.981062 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532524-dsvng" podStartSLOduration=1.8291093059999999 podStartE2EDuration="2.981044642s" podCreationTimestamp="2026-02-24 16:44:00 +0000 UTC" firstStartedPulling="2026-02-24 16:44:01.168598126 +0000 UTC m=+6902.787656619" lastFinishedPulling="2026-02-24 16:44:02.320533432 +0000 UTC m=+6903.939591955" observedRunningTime="2026-02-24 16:44:02.978195874 +0000 UTC m=+6904.597254377" watchObservedRunningTime="2026-02-24 16:44:02.981044642 +0000 UTC m=+6904.600103135" Feb 24 16:44:03 crc kubenswrapper[4982]: I0224 16:44:03.974851 4982 generic.go:334] "Generic (PLEG): container finished" podID="07843f6d-85cf-4c83-bf38-2ad660f9ad93" containerID="8ea014902956ca7c36361dffcca590dcb778b7667c2e1b89145910638f2d6f01" exitCode=0 Feb 24 16:44:03 crc kubenswrapper[4982]: I0224 16:44:03.974897 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532524-dsvng" event={"ID":"07843f6d-85cf-4c83-bf38-2ad660f9ad93","Type":"ContainerDied","Data":"8ea014902956ca7c36361dffcca590dcb778b7667c2e1b89145910638f2d6f01"} Feb 24 16:44:05 crc kubenswrapper[4982]: I0224 16:44:05.495009 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532524-dsvng" Feb 24 16:44:05 crc kubenswrapper[4982]: I0224 16:44:05.656108 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbdtz\" (UniqueName: \"kubernetes.io/projected/07843f6d-85cf-4c83-bf38-2ad660f9ad93-kube-api-access-rbdtz\") pod \"07843f6d-85cf-4c83-bf38-2ad660f9ad93\" (UID: \"07843f6d-85cf-4c83-bf38-2ad660f9ad93\") " Feb 24 16:44:05 crc kubenswrapper[4982]: I0224 16:44:05.665684 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07843f6d-85cf-4c83-bf38-2ad660f9ad93-kube-api-access-rbdtz" (OuterVolumeSpecName: "kube-api-access-rbdtz") pod "07843f6d-85cf-4c83-bf38-2ad660f9ad93" (UID: "07843f6d-85cf-4c83-bf38-2ad660f9ad93"). InnerVolumeSpecName "kube-api-access-rbdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:44:05 crc kubenswrapper[4982]: I0224 16:44:05.758990 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbdtz\" (UniqueName: \"kubernetes.io/projected/07843f6d-85cf-4c83-bf38-2ad660f9ad93-kube-api-access-rbdtz\") on node \"crc\" DevicePath \"\"" Feb 24 16:44:06 crc kubenswrapper[4982]: I0224 16:44:06.008707 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532524-dsvng" event={"ID":"07843f6d-85cf-4c83-bf38-2ad660f9ad93","Type":"ContainerDied","Data":"3f69dd8bb490bea26e74e00b0ea9072ebd91fda1e2d9f486e10d72d257cfd277"} Feb 24 16:44:06 crc kubenswrapper[4982]: I0224 16:44:06.008769 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f69dd8bb490bea26e74e00b0ea9072ebd91fda1e2d9f486e10d72d257cfd277" Feb 24 16:44:06 crc kubenswrapper[4982]: I0224 16:44:06.008882 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532524-dsvng" Feb 24 16:44:06 crc kubenswrapper[4982]: I0224 16:44:06.094113 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532518-rsrcr"] Feb 24 16:44:06 crc kubenswrapper[4982]: I0224 16:44:06.108446 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532518-rsrcr"] Feb 24 16:44:07 crc kubenswrapper[4982]: I0224 16:44:07.171162 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588664ef-cf4a-4df0-9bf9-be60560886c3" path="/var/lib/kubelet/pods/588664ef-cf4a-4df0-9bf9-be60560886c3/volumes" Feb 24 16:44:11 crc kubenswrapper[4982]: I0224 16:44:11.145790 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:44:11 crc kubenswrapper[4982]: E0224 16:44:11.146682 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.163725 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lmd4z"] Feb 24 16:44:17 crc kubenswrapper[4982]: E0224 16:44:17.165234 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07843f6d-85cf-4c83-bf38-2ad660f9ad93" containerName="oc" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.165258 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="07843f6d-85cf-4c83-bf38-2ad660f9ad93" containerName="oc" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.165676 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="07843f6d-85cf-4c83-bf38-2ad660f9ad93" containerName="oc" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.167953 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmd4z"] Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.168064 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.288368 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-catalog-content\") pod \"certified-operators-lmd4z\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.288817 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-utilities\") pod \"certified-operators-lmd4z\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.289066 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtv2j\" (UniqueName: \"kubernetes.io/projected/7e5f41db-e709-48a1-b207-3d270689169c-kube-api-access-wtv2j\") pod \"certified-operators-lmd4z\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.391269 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtv2j\" (UniqueName: \"kubernetes.io/projected/7e5f41db-e709-48a1-b207-3d270689169c-kube-api-access-wtv2j\") pod \"certified-operators-lmd4z\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.391540 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-catalog-content\") pod \"certified-operators-lmd4z\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.391731 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-utilities\") pod \"certified-operators-lmd4z\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.392122 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-catalog-content\") pod \"certified-operators-lmd4z\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.392308 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-utilities\") pod \"certified-operators-lmd4z\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.417424 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtv2j\" (UniqueName: \"kubernetes.io/projected/7e5f41db-e709-48a1-b207-3d270689169c-kube-api-access-wtv2j\") pod 
\"certified-operators-lmd4z\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:17 crc kubenswrapper[4982]: I0224 16:44:17.497703 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:18 crc kubenswrapper[4982]: I0224 16:44:18.007600 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmd4z"] Feb 24 16:44:18 crc kubenswrapper[4982]: I0224 16:44:18.182073 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmd4z" event={"ID":"7e5f41db-e709-48a1-b207-3d270689169c","Type":"ContainerStarted","Data":"cb98c7407b727a9f5bce8a9b13bbd9e294b2215a0e37356c075ae36933056123"} Feb 24 16:44:19 crc kubenswrapper[4982]: I0224 16:44:19.196833 4982 generic.go:334] "Generic (PLEG): container finished" podID="7e5f41db-e709-48a1-b207-3d270689169c" containerID="aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d" exitCode=0 Feb 24 16:44:19 crc kubenswrapper[4982]: I0224 16:44:19.198397 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmd4z" event={"ID":"7e5f41db-e709-48a1-b207-3d270689169c","Type":"ContainerDied","Data":"aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d"} Feb 24 16:44:20 crc kubenswrapper[4982]: I0224 16:44:20.213879 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmd4z" event={"ID":"7e5f41db-e709-48a1-b207-3d270689169c","Type":"ContainerStarted","Data":"c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b"} Feb 24 16:44:22 crc kubenswrapper[4982]: I0224 16:44:22.239410 4982 generic.go:334] "Generic (PLEG): container finished" podID="7e5f41db-e709-48a1-b207-3d270689169c" containerID="c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b" exitCode=0 Feb 24 16:44:22 crc kubenswrapper[4982]: I0224 16:44:22.239832 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmd4z" event={"ID":"7e5f41db-e709-48a1-b207-3d270689169c","Type":"ContainerDied","Data":"c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b"} Feb 24 16:44:23 crc kubenswrapper[4982]: I0224 16:44:23.259600 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmd4z" event={"ID":"7e5f41db-e709-48a1-b207-3d270689169c","Type":"ContainerStarted","Data":"cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb"} Feb 24 16:44:23 crc kubenswrapper[4982]: I0224 16:44:23.295973 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lmd4z" podStartSLOduration=2.87142749 podStartE2EDuration="6.295953036s" podCreationTimestamp="2026-02-24 16:44:17 +0000 UTC" firstStartedPulling="2026-02-24 16:44:19.201365311 +0000 UTC m=+6920.820423824" lastFinishedPulling="2026-02-24 16:44:22.625890877 +0000 UTC m=+6924.244949370" observedRunningTime="2026-02-24 16:44:23.287129196 +0000 UTC m=+6924.906187699" watchObservedRunningTime="2026-02-24 16:44:23.295953036 +0000 UTC m=+6924.915011529" Feb 24 16:44:25 crc kubenswrapper[4982]: I0224 16:44:25.145724 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:44:25 crc kubenswrapper[4982]: E0224 16:44:25.146849 4982 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:44:27 crc kubenswrapper[4982]: I0224 16:44:27.498164 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:27 crc kubenswrapper[4982]: I0224 16:44:27.498604 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:28 crc kubenswrapper[4982]: I0224 16:44:28.574679 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lmd4z" podUID="7e5f41db-e709-48a1-b207-3d270689169c" containerName="registry-server" probeResult="failure" output=< Feb 24 16:44:28 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Feb 24 16:44:28 crc kubenswrapper[4982]: > Feb 24 16:44:37 crc kubenswrapper[4982]: I0224 16:44:37.146308 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:44:37 crc kubenswrapper[4982]: E0224 16:44:37.147295 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:44:37 crc kubenswrapper[4982]: I0224 16:44:37.549682 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:37 crc kubenswrapper[4982]: I0224 16:44:37.611967 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:37 crc kubenswrapper[4982]: I0224 16:44:37.783619 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmd4z"] Feb 24 16:44:39 crc kubenswrapper[4982]: I0224 16:44:39.451409 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lmd4z" podUID="7e5f41db-e709-48a1-b207-3d270689169c" containerName="registry-server" containerID="cri-o://cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb" gracePeriod=2 Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.168655 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.194519 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtv2j\" (UniqueName: \"kubernetes.io/projected/7e5f41db-e709-48a1-b207-3d270689169c-kube-api-access-wtv2j\") pod \"7e5f41db-e709-48a1-b207-3d270689169c\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.194573 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-utilities\") pod \"7e5f41db-e709-48a1-b207-3d270689169c\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.195623 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-utilities" (OuterVolumeSpecName: "utilities") pod "7e5f41db-e709-48a1-b207-3d270689169c" (UID: "7e5f41db-e709-48a1-b207-3d270689169c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.196066 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-catalog-content\") pod \"7e5f41db-e709-48a1-b207-3d270689169c\" (UID: \"7e5f41db-e709-48a1-b207-3d270689169c\") " Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.200568 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e5f41db-e709-48a1-b207-3d270689169c-kube-api-access-wtv2j" (OuterVolumeSpecName: "kube-api-access-wtv2j") pod "7e5f41db-e709-48a1-b207-3d270689169c" (UID: "7e5f41db-e709-48a1-b207-3d270689169c"). InnerVolumeSpecName "kube-api-access-wtv2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.201148 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtv2j\" (UniqueName: \"kubernetes.io/projected/7e5f41db-e709-48a1-b207-3d270689169c-kube-api-access-wtv2j\") on node \"crc\" DevicePath \"\"" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.201177 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.249485 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e5f41db-e709-48a1-b207-3d270689169c" (UID: "7e5f41db-e709-48a1-b207-3d270689169c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.304101 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e5f41db-e709-48a1-b207-3d270689169c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.467091 4982 generic.go:334] "Generic (PLEG): container finished" podID="7e5f41db-e709-48a1-b207-3d270689169c" containerID="cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb" exitCode=0 Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.467164 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmd4z" event={"ID":"7e5f41db-e709-48a1-b207-3d270689169c","Type":"ContainerDied","Data":"cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb"} Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.468287 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmd4z" event={"ID":"7e5f41db-e709-48a1-b207-3d270689169c","Type":"ContainerDied","Data":"cb98c7407b727a9f5bce8a9b13bbd9e294b2215a0e37356c075ae36933056123"} Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.467289 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmd4z" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.468322 4982 scope.go:117] "RemoveContainer" containerID="cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.509131 4982 scope.go:117] "RemoveContainer" containerID="c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.512749 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmd4z"] Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.526057 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lmd4z"] Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.560620 4982 scope.go:117] "RemoveContainer" containerID="aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.606656 4982 scope.go:117] "RemoveContainer" containerID="cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb" Feb 24 16:44:40 crc kubenswrapper[4982]: E0224 16:44:40.607315 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb\": container with ID starting with cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb not found: ID does not exist" containerID="cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.607383 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb"} err="failed to get container status \"cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb\": rpc error: code = NotFound desc = could not find container \"cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb\": container with ID starting with cf550b5257cce08bdea47144caa8e66449ced69f17c1ca03ecd905cc8fe826cb not found: ID does not exist" Feb 24 
16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.607426 4982 scope.go:117] "RemoveContainer" containerID="c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b" Feb 24 16:44:40 crc kubenswrapper[4982]: E0224 16:44:40.607847 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b\": container with ID starting with c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b not found: ID does not exist" containerID="c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.607887 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b"} err="failed to get container status \"c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b\": rpc error: code = NotFound desc = could not find container \"c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b\": container with ID starting with c45b54c30f15deb3f13f501af3ffc69d3e225a11b1249b9caf1fbfbfc91f970b not found: ID does not exist" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.607916 4982 scope.go:117] "RemoveContainer" containerID="aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d" Feb 24 16:44:40 crc kubenswrapper[4982]: E0224 16:44:40.608389 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d\": container with ID starting with aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d not found: ID does not exist" containerID="aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d" Feb 24 16:44:40 crc kubenswrapper[4982]: I0224 16:44:40.608416 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d"} err="failed to get container status \"aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d\": rpc error: code = NotFound desc = could not find container \"aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d\": container with ID starting with aff986f8254071699abb6a2c31311ab9618fa334acebfa4127ec329401b9cd3d not found: ID does not exist" Feb 24 16:44:41 crc kubenswrapper[4982]: I0224 16:44:41.161121 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e5f41db-e709-48a1-b207-3d270689169c" path="/var/lib/kubelet/pods/7e5f41db-e709-48a1-b207-3d270689169c/volumes" Feb 24 16:44:44 crc kubenswrapper[4982]: I0224 16:44:44.526018 4982 generic.go:334] "Generic (PLEG): container finished" podID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" containerID="6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638" exitCode=0 Feb 24 16:44:44 crc kubenswrapper[4982]: I0224 16:44:44.526138 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6bf2x/must-gather-4tsr9" event={"ID":"26ecbcc1-7cd8-453b-96ba-6fea013ae275","Type":"ContainerDied","Data":"6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638"} Feb 24 16:44:44 crc kubenswrapper[4982]: I0224 16:44:44.530055 4982 scope.go:117] "RemoveContainer" containerID="6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638" Feb 24 16:44:45 crc kubenswrapper[4982]: I0224 16:44:45.151850 4982 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-must-gather-6bf2x_must-gather-4tsr9_26ecbcc1-7cd8-453b-96ba-6fea013ae275/gather/0.log" Feb 24 16:44:49 crc kubenswrapper[4982]: I0224 16:44:49.160747 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:44:49 crc kubenswrapper[4982]: E0224 16:44:49.161688 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:44:57 crc kubenswrapper[4982]: I0224 16:44:57.830797 4982 scope.go:117] "RemoveContainer" containerID="f12b1e0f6068a1c7f919758eb73621427d4468a4168f453084c09dd7b61172a6" Feb 24 16:44:57 crc kubenswrapper[4982]: I0224 16:44:57.912269 4982 scope.go:117] "RemoveContainer" containerID="271c6f1eddd8e057ed6e36bfb50fcdd11766d9a4f25e55167e737aa299802544" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.146244 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:45:00 crc kubenswrapper[4982]: E0224 16:45:00.147194 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.171221 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb"] Feb 24 16:45:00 crc kubenswrapper[4982]: E0224 16:45:00.171777 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e5f41db-e709-48a1-b207-3d270689169c" containerName="registry-server" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.171800 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5f41db-e709-48a1-b207-3d270689169c" containerName="registry-server" Feb 24 16:45:00 crc kubenswrapper[4982]: E0224 16:45:00.171860 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e5f41db-e709-48a1-b207-3d270689169c" containerName="extract-content" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.171868 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5f41db-e709-48a1-b207-3d270689169c" containerName="extract-content" Feb 24 16:45:00 crc kubenswrapper[4982]: E0224 16:45:00.171897 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e5f41db-e709-48a1-b207-3d270689169c" containerName="extract-utilities" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.171906 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5f41db-e709-48a1-b207-3d270689169c" containerName="extract-utilities" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.172214 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e5f41db-e709-48a1-b207-3d270689169c" containerName="registry-server" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.173145 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.177263 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.185037 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.192313 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb"] Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.257496 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf507bd-f092-4732-b07e-6b418c83fe12-config-volume\") pod \"collect-profiles-29532525-zr5zb\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.257688 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbf507bd-f092-4732-b07e-6b418c83fe12-secret-volume\") pod \"collect-profiles-29532525-zr5zb\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.257731 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4g77\" (UniqueName: \"kubernetes.io/projected/bbf507bd-f092-4732-b07e-6b418c83fe12-kube-api-access-q4g77\") pod \"collect-profiles-29532525-zr5zb\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.360755 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbf507bd-f092-4732-b07e-6b418c83fe12-secret-volume\") pod \"collect-profiles-29532525-zr5zb\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.360889 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4g77\" (UniqueName: \"kubernetes.io/projected/bbf507bd-f092-4732-b07e-6b418c83fe12-kube-api-access-q4g77\") pod \"collect-profiles-29532525-zr5zb\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.360976 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf507bd-f092-4732-b07e-6b418c83fe12-config-volume\") pod \"collect-profiles-29532525-zr5zb\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.362642 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf507bd-f092-4732-b07e-6b418c83fe12-config-volume\") pod 
\"collect-profiles-29532525-zr5zb\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.369234 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbf507bd-f092-4732-b07e-6b418c83fe12-secret-volume\") pod \"collect-profiles-29532525-zr5zb\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.378716 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4g77\" (UniqueName: \"kubernetes.io/projected/bbf507bd-f092-4732-b07e-6b418c83fe12-kube-api-access-q4g77\") pod \"collect-profiles-29532525-zr5zb\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:00 crc kubenswrapper[4982]: I0224 16:45:00.498060 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:01 crc kubenswrapper[4982]: I0224 16:45:01.020903 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb"] Feb 24 16:45:01 crc kubenswrapper[4982]: I0224 16:45:01.755961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" event={"ID":"bbf507bd-f092-4732-b07e-6b418c83fe12","Type":"ContainerStarted","Data":"a4f1519e97afefd5fc0697037f22b0424e64ac766257dae73774efdf6aa940df"} Feb 24 16:45:01 crc kubenswrapper[4982]: I0224 16:45:01.756297 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" event={"ID":"bbf507bd-f092-4732-b07e-6b418c83fe12","Type":"ContainerStarted","Data":"2804e98e4f1587cbf86fbedb1cf20900b0e5ea5cbb60e2306c60321d666a8894"} Feb 24 16:45:01 crc kubenswrapper[4982]: I0224 16:45:01.774578 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" podStartSLOduration=1.774552117 podStartE2EDuration="1.774552117s" podCreationTimestamp="2026-02-24 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 16:45:01.767854605 +0000 UTC m=+6963.386913098" watchObservedRunningTime="2026-02-24 16:45:01.774552117 +0000 UTC m=+6963.393610620" Feb 24 16:45:02 crc kubenswrapper[4982]: I0224 16:45:02.782873 4982 generic.go:334] "Generic (PLEG): container finished" podID="bbf507bd-f092-4732-b07e-6b418c83fe12" containerID="a4f1519e97afefd5fc0697037f22b0424e64ac766257dae73774efdf6aa940df" exitCode=0 Feb 24 16:45:02 crc kubenswrapper[4982]: I0224 16:45:02.783201 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" event={"ID":"bbf507bd-f092-4732-b07e-6b418c83fe12","Type":"ContainerDied","Data":"a4f1519e97afefd5fc0697037f22b0424e64ac766257dae73774efdf6aa940df"} Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.201814 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6bf2x/must-gather-4tsr9"] Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.202182 4982 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6bf2x/must-gather-4tsr9" podUID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" containerName="copy" containerID="cri-o://5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb" gracePeriod=2 Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.220453 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6bf2x/must-gather-4tsr9"] Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.729041 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6bf2x_must-gather-4tsr9_26ecbcc1-7cd8-453b-96ba-6fea013ae275/copy/0.log" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.729849 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6bf2x/must-gather-4tsr9" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.756655 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/26ecbcc1-7cd8-453b-96ba-6fea013ae275-must-gather-output\") pod \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\" (UID: \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\") " Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.756846 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f5zf\" (UniqueName: \"kubernetes.io/projected/26ecbcc1-7cd8-453b-96ba-6fea013ae275-kube-api-access-8f5zf\") pod \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\" (UID: \"26ecbcc1-7cd8-453b-96ba-6fea013ae275\") " Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.765773 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ecbcc1-7cd8-453b-96ba-6fea013ae275-kube-api-access-8f5zf" (OuterVolumeSpecName: "kube-api-access-8f5zf") pod "26ecbcc1-7cd8-453b-96ba-6fea013ae275" (UID: "26ecbcc1-7cd8-453b-96ba-6fea013ae275"). InnerVolumeSpecName "kube-api-access-8f5zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.801213 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6bf2x_must-gather-4tsr9_26ecbcc1-7cd8-453b-96ba-6fea013ae275/copy/0.log" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.801591 4982 generic.go:334] "Generic (PLEG): container finished" podID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" containerID="5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb" exitCode=143 Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.801658 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6bf2x/must-gather-4tsr9" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.801686 4982 scope.go:117] "RemoveContainer" containerID="5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.857453 4982 scope.go:117] "RemoveContainer" containerID="6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.861948 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f5zf\" (UniqueName: \"kubernetes.io/projected/26ecbcc1-7cd8-453b-96ba-6fea013ae275-kube-api-access-8f5zf\") on node \"crc\" DevicePath \"\"" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.955976 4982 scope.go:117] "RemoveContainer" containerID="5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb" Feb 24 16:45:03 crc kubenswrapper[4982]: E0224 16:45:03.957041 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb\": container with ID starting with 5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb not found: ID does not exist" containerID="5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.957078 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb"} err="failed to get container status \"5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb\": rpc error: code = NotFound desc = could not find container \"5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb\": container with ID starting with 5622108fe24c5ddce86040e4e0d2315e5134d133dd06022aef23909550667ddb not found: ID does not exist" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.957106 4982 scope.go:117] "RemoveContainer" containerID="6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638" Feb 24 16:45:03 crc kubenswrapper[4982]: E0224 16:45:03.957482 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638\": container with ID starting with 6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638 not found: ID does not exist" containerID="6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.957527 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638"} err="failed to get container status \"6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638\": rpc error: code = NotFound desc = could not find container \"6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638\": container with ID starting with 6e4695e60763d63e9ea0b5bdd1cdefffcc13550eeafd335825e09bb071feb638 not found: ID does not exist" Feb 24 16:45:03 crc kubenswrapper[4982]: I0224 16:45:03.985146 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ecbcc1-7cd8-453b-96ba-6fea013ae275-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "26ecbcc1-7cd8-453b-96ba-6fea013ae275" (UID: "26ecbcc1-7cd8-453b-96ba-6fea013ae275"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.066628 4982 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/26ecbcc1-7cd8-453b-96ba-6fea013ae275-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.132571 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.171164 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4g77\" (UniqueName: \"kubernetes.io/projected/bbf507bd-f092-4732-b07e-6b418c83fe12-kube-api-access-q4g77\") pod \"bbf507bd-f092-4732-b07e-6b418c83fe12\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.173800 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbf507bd-f092-4732-b07e-6b418c83fe12-secret-volume\") pod \"bbf507bd-f092-4732-b07e-6b418c83fe12\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.174957 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf507bd-f092-4732-b07e-6b418c83fe12-config-volume\") pod \"bbf507bd-f092-4732-b07e-6b418c83fe12\" (UID: \"bbf507bd-f092-4732-b07e-6b418c83fe12\") " Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.175565 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbf507bd-f092-4732-b07e-6b418c83fe12-config-volume" (OuterVolumeSpecName: "config-volume") pod "bbf507bd-f092-4732-b07e-6b418c83fe12" (UID: "bbf507bd-f092-4732-b07e-6b418c83fe12"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.176642 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbf507bd-f092-4732-b07e-6b418c83fe12-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.179247 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbf507bd-f092-4732-b07e-6b418c83fe12-kube-api-access-q4g77" (OuterVolumeSpecName: "kube-api-access-q4g77") pod "bbf507bd-f092-4732-b07e-6b418c83fe12" (UID: "bbf507bd-f092-4732-b07e-6b418c83fe12"). InnerVolumeSpecName "kube-api-access-q4g77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.181458 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbf507bd-f092-4732-b07e-6b418c83fe12-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bbf507bd-f092-4732-b07e-6b418c83fe12" (UID: "bbf507bd-f092-4732-b07e-6b418c83fe12"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.279485 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4g77\" (UniqueName: \"kubernetes.io/projected/bbf507bd-f092-4732-b07e-6b418c83fe12-kube-api-access-q4g77\") on node \"crc\" DevicePath \"\"" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.279550 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbf507bd-f092-4732-b07e-6b418c83fe12-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.818112 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.818110 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29532525-zr5zb" event={"ID":"bbf507bd-f092-4732-b07e-6b418c83fe12","Type":"ContainerDied","Data":"2804e98e4f1587cbf86fbedb1cf20900b0e5ea5cbb60e2306c60321d666a8894"} Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.818314 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2804e98e4f1587cbf86fbedb1cf20900b0e5ea5cbb60e2306c60321d666a8894" Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.857817 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh"] Feb 24 16:45:04 crc kubenswrapper[4982]: I0224 16:45:04.872419 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29532480-5hfvh"] Feb 24 16:45:05 crc kubenswrapper[4982]: I0224 16:45:05.163785 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" path="/var/lib/kubelet/pods/26ecbcc1-7cd8-453b-96ba-6fea013ae275/volumes" Feb 24 16:45:05 crc kubenswrapper[4982]: I0224 16:45:05.169847 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3" path="/var/lib/kubelet/pods/c81ff75e-fdd8-4cc9-bec8-8b7e7ea14ea3/volumes" Feb 24 16:45:12 crc kubenswrapper[4982]: I0224 16:45:12.146179 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:45:12 crc kubenswrapper[4982]: E0224 16:45:12.146904 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:45:24 crc kubenswrapper[4982]: I0224 16:45:24.169329 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:45:24 crc kubenswrapper[4982]: E0224 16:45:24.181628 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:45:38 crc kubenswrapper[4982]: I0224 16:45:38.147204 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:45:38 crc kubenswrapper[4982]: E0224 16:45:38.148578 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:45:51 crc kubenswrapper[4982]: I0224 16:45:51.145790 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:45:51 crc kubenswrapper[4982]: E0224 16:45:51.146819 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:45:58 crc kubenswrapper[4982]: I0224 16:45:58.006460 4982 scope.go:117] "RemoveContainer" containerID="4024d0dc8573b0593e580da50527902e8b32c770490c99accdf35118b9a89524" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.162843 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532526-4qkv7"] Feb 24 16:46:00 crc kubenswrapper[4982]: E0224 16:46:00.165903 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" containerName="copy" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.166095 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" containerName="copy" Feb 24 16:46:00 crc kubenswrapper[4982]: E0224 16:46:00.166254 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" containerName="gather" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.166409 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" containerName="gather" Feb 24 16:46:00 crc kubenswrapper[4982]: E0224 16:46:00.166603 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf507bd-f092-4732-b07e-6b418c83fe12" containerName="collect-profiles" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.166733 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf507bd-f092-4732-b07e-6b418c83fe12" containerName="collect-profiles" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.167271 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" containerName="copy" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.167456 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf507bd-f092-4732-b07e-6b418c83fe12" containerName="collect-profiles" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.167790 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ecbcc1-7cd8-453b-96ba-6fea013ae275" containerName="gather" Feb 
24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.169280 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532526-4qkv7" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.173043 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.173387 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.175634 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.176689 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532526-4qkv7"] Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.348570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vrvv\" (UniqueName: \"kubernetes.io/projected/8f6cd5db-f350-46b6-a239-0f92182868c5-kube-api-access-5vrvv\") pod \"auto-csr-approver-29532526-4qkv7\" (UID: \"8f6cd5db-f350-46b6-a239-0f92182868c5\") " pod="openshift-infra/auto-csr-approver-29532526-4qkv7" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.451466 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vrvv\" (UniqueName: \"kubernetes.io/projected/8f6cd5db-f350-46b6-a239-0f92182868c5-kube-api-access-5vrvv\") pod \"auto-csr-approver-29532526-4qkv7\" (UID: \"8f6cd5db-f350-46b6-a239-0f92182868c5\") " pod="openshift-infra/auto-csr-approver-29532526-4qkv7" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.472445 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vrvv\" (UniqueName: \"kubernetes.io/projected/8f6cd5db-f350-46b6-a239-0f92182868c5-kube-api-access-5vrvv\") pod \"auto-csr-approver-29532526-4qkv7\" (UID: \"8f6cd5db-f350-46b6-a239-0f92182868c5\") " pod="openshift-infra/auto-csr-approver-29532526-4qkv7" Feb 24 16:46:00 crc kubenswrapper[4982]: I0224 16:46:00.495385 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532526-4qkv7" Feb 24 16:46:01 crc kubenswrapper[4982]: I0224 16:46:01.092015 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532526-4qkv7"] Feb 24 16:46:01 crc kubenswrapper[4982]: I0224 16:46:01.614741 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532526-4qkv7" event={"ID":"8f6cd5db-f350-46b6-a239-0f92182868c5","Type":"ContainerStarted","Data":"1ce5755a4cb1e52fa64d60734dcfabae09391177606a9f4ea48913dc6ac37ac4"} Feb 24 16:46:02 crc kubenswrapper[4982]: I0224 16:46:02.146832 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:46:02 crc kubenswrapper[4982]: E0224 16:46:02.147674 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:46:03 crc kubenswrapper[4982]: I0224 16:46:03.649091 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532526-4qkv7" event={"ID":"8f6cd5db-f350-46b6-a239-0f92182868c5","Type":"ContainerStarted","Data":"717d226889bb324f1fd84ce029c1eeb2b4d2a34579d3f1f55646c71a82c233a3"} Feb 24 16:46:03 crc kubenswrapper[4982]: I0224 16:46:03.670231 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532526-4qkv7" podStartSLOduration=2.754177162 podStartE2EDuration="3.670213717s" podCreationTimestamp="2026-02-24 16:46:00 +0000 UTC" firstStartedPulling="2026-02-24 16:46:01.104827626 +0000 UTC m=+7022.723886159" lastFinishedPulling="2026-02-24 16:46:02.020864181 +0000 UTC m=+7023.639922714" observedRunningTime="2026-02-24 16:46:03.667916464 +0000 UTC m=+7025.286974967" watchObservedRunningTime="2026-02-24 16:46:03.670213717 +0000 UTC m=+7025.289272220" Feb 24 16:46:04 crc kubenswrapper[4982]: I0224 16:46:04.663132 4982 generic.go:334] "Generic (PLEG): container finished" podID="8f6cd5db-f350-46b6-a239-0f92182868c5" containerID="717d226889bb324f1fd84ce029c1eeb2b4d2a34579d3f1f55646c71a82c233a3" exitCode=0 Feb 24 16:46:04 crc kubenswrapper[4982]: I0224 16:46:04.663177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532526-4qkv7" event={"ID":"8f6cd5db-f350-46b6-a239-0f92182868c5","Type":"ContainerDied","Data":"717d226889bb324f1fd84ce029c1eeb2b4d2a34579d3f1f55646c71a82c233a3"} Feb 24 16:46:06 crc kubenswrapper[4982]: I0224 16:46:06.187966 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532526-4qkv7" Feb 24 16:46:06 crc kubenswrapper[4982]: I0224 16:46:06.214934 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vrvv\" (UniqueName: \"kubernetes.io/projected/8f6cd5db-f350-46b6-a239-0f92182868c5-kube-api-access-5vrvv\") pod \"8f6cd5db-f350-46b6-a239-0f92182868c5\" (UID: \"8f6cd5db-f350-46b6-a239-0f92182868c5\") " Feb 24 16:46:06 crc kubenswrapper[4982]: I0224 16:46:06.223002 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6cd5db-f350-46b6-a239-0f92182868c5-kube-api-access-5vrvv" (OuterVolumeSpecName: "kube-api-access-5vrvv") pod "8f6cd5db-f350-46b6-a239-0f92182868c5" (UID: "8f6cd5db-f350-46b6-a239-0f92182868c5"). InnerVolumeSpecName "kube-api-access-5vrvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:46:06 crc kubenswrapper[4982]: I0224 16:46:06.319289 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vrvv\" (UniqueName: \"kubernetes.io/projected/8f6cd5db-f350-46b6-a239-0f92182868c5-kube-api-access-5vrvv\") on node \"crc\" DevicePath \"\"" Feb 24 16:46:06 crc kubenswrapper[4982]: I0224 16:46:06.692487 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532526-4qkv7" event={"ID":"8f6cd5db-f350-46b6-a239-0f92182868c5","Type":"ContainerDied","Data":"1ce5755a4cb1e52fa64d60734dcfabae09391177606a9f4ea48913dc6ac37ac4"} Feb 24 16:46:06 crc kubenswrapper[4982]: I0224 16:46:06.692890 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ce5755a4cb1e52fa64d60734dcfabae09391177606a9f4ea48913dc6ac37ac4" Feb 24 16:46:06 crc kubenswrapper[4982]: I0224 16:46:06.692578 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532526-4qkv7" Feb 24 16:46:06 crc kubenswrapper[4982]: I0224 16:46:06.751288 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532520-pd6tl"] Feb 24 16:46:06 crc kubenswrapper[4982]: I0224 16:46:06.768773 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532520-pd6tl"] Feb 24 16:46:07 crc kubenswrapper[4982]: I0224 16:46:07.168164 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3a9b30-4c42-4dc0-9785-5406c44c31df" path="/var/lib/kubelet/pods/bc3a9b30-4c42-4dc0-9785-5406c44c31df/volumes" Feb 24 16:46:14 crc kubenswrapper[4982]: I0224 16:46:14.145884 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:46:14 crc kubenswrapper[4982]: E0224 16:46:14.147277 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:46:27 crc kubenswrapper[4982]: I0224 16:46:27.146179 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:46:27 crc kubenswrapper[4982]: E0224 16:46:27.147084 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:46:40 crc kubenswrapper[4982]: I0224 16:46:40.147033 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:46:40 crc kubenswrapper[4982]: E0224 16:46:40.150668 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:46:51 crc kubenswrapper[4982]: I0224 16:46:51.147023 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:46:51 crc kubenswrapper[4982]: E0224 16:46:51.148171 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:46:58 crc kubenswrapper[4982]: I0224 16:46:58.141946 4982 scope.go:117] "RemoveContainer" containerID="d5468bcf1438040a0172907a46c308b07cba77d25721f7830f717d7a20496cd7" Feb 24 
16:47:06 crc kubenswrapper[4982]: I0224 16:47:06.148112 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:47:06 crc kubenswrapper[4982]: E0224 16:47:06.149832 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b79sf_openshift-machine-config-operator(bf688571-4e47-42da-80b4-0d54580ce6c8)\"" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" podUID="bf688571-4e47-42da-80b4-0d54580ce6c8" Feb 24 16:47:21 crc kubenswrapper[4982]: I0224 16:47:21.147853 4982 scope.go:117] "RemoveContainer" containerID="457e1233095d10f7ead1e2344c2bd7c75435da6c5e3b6eae081ef1e92cd4ac50" Feb 24 16:47:21 crc kubenswrapper[4982]: I0224 16:47:21.826109 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b79sf" event={"ID":"bf688571-4e47-42da-80b4-0d54580ce6c8","Type":"ContainerStarted","Data":"ef13a921c6bfbd35f60526673d592d7e1ec25c14b7675cce4b91c1cdcbf64302"} Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.140809 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29532528-7pg8w"] Feb 24 16:48:00 crc kubenswrapper[4982]: E0224 16:48:00.141855 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f6cd5db-f350-46b6-a239-0f92182868c5" containerName="oc" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.141867 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f6cd5db-f350-46b6-a239-0f92182868c5" containerName="oc" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.142080 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f6cd5db-f350-46b6-a239-0f92182868c5" containerName="oc" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.142909 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532528-7pg8w" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.144984 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.145113 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-654q8" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.145322 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.150582 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532528-7pg8w"] Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.178480 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4wck\" (UniqueName: \"kubernetes.io/projected/359b97d5-6fe7-4f65-9785-7ad1612424e2-kube-api-access-q4wck\") pod \"auto-csr-approver-29532528-7pg8w\" (UID: \"359b97d5-6fe7-4f65-9785-7ad1612424e2\") " pod="openshift-infra/auto-csr-approver-29532528-7pg8w" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.281024 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4wck\" (UniqueName: \"kubernetes.io/projected/359b97d5-6fe7-4f65-9785-7ad1612424e2-kube-api-access-q4wck\") pod \"auto-csr-approver-29532528-7pg8w\" (UID: \"359b97d5-6fe7-4f65-9785-7ad1612424e2\") " pod="openshift-infra/auto-csr-approver-29532528-7pg8w" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.302000 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4wck\" (UniqueName: \"kubernetes.io/projected/359b97d5-6fe7-4f65-9785-7ad1612424e2-kube-api-access-q4wck\") pod \"auto-csr-approver-29532528-7pg8w\" (UID: \"359b97d5-6fe7-4f65-9785-7ad1612424e2\") " pod="openshift-infra/auto-csr-approver-29532528-7pg8w" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.459784 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532528-7pg8w" Feb 24 16:48:00 crc kubenswrapper[4982]: I0224 16:48:00.997798 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29532528-7pg8w"] Feb 24 16:48:01 crc kubenswrapper[4982]: W0224 16:48:01.008445 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod359b97d5_6fe7_4f65_9785_7ad1612424e2.slice/crio-8285c0953223da9bcc2816e51348fc09e014cc90049b050bf343b6ad48e23678 WatchSource:0}: Error finding container 8285c0953223da9bcc2816e51348fc09e014cc90049b050bf343b6ad48e23678: Status 404 returned error can't find the container with id 8285c0953223da9bcc2816e51348fc09e014cc90049b050bf343b6ad48e23678 Feb 24 16:48:01 crc kubenswrapper[4982]: I0224 16:48:01.010872 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 16:48:01 crc kubenswrapper[4982]: I0224 16:48:01.392789 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532528-7pg8w" event={"ID":"359b97d5-6fe7-4f65-9785-7ad1612424e2","Type":"ContainerStarted","Data":"8285c0953223da9bcc2816e51348fc09e014cc90049b050bf343b6ad48e23678"} Feb 24 16:48:02 crc kubenswrapper[4982]: I0224 16:48:02.405003 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532528-7pg8w" event={"ID":"359b97d5-6fe7-4f65-9785-7ad1612424e2","Type":"ContainerStarted","Data":"3560a939bc39d9b7be22f1b314a50bad4b48db2b8a7df0a4329fb97dd97ee566"} Feb 24 16:48:02 crc kubenswrapper[4982]: I0224 16:48:02.435304 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29532528-7pg8w" podStartSLOduration=1.49849891 podStartE2EDuration="2.435276469s" podCreationTimestamp="2026-02-24 16:48:00 +0000 UTC" firstStartedPulling="2026-02-24 16:48:01.010486715 +0000 UTC m=+7142.629545208" lastFinishedPulling="2026-02-24 16:48:01.947264274 +0000 UTC m=+7143.566322767" observedRunningTime="2026-02-24 16:48:02.429028309 +0000 UTC m=+7144.048086842" watchObservedRunningTime="2026-02-24 16:48:02.435276469 +0000 UTC m=+7144.054334992" Feb 24 16:48:03 crc kubenswrapper[4982]: I0224 16:48:03.417354 4982 generic.go:334] "Generic (PLEG): container finished" podID="359b97d5-6fe7-4f65-9785-7ad1612424e2" containerID="3560a939bc39d9b7be22f1b314a50bad4b48db2b8a7df0a4329fb97dd97ee566" exitCode=0 Feb 24 16:48:03 crc kubenswrapper[4982]: I0224 16:48:03.417405 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532528-7pg8w" event={"ID":"359b97d5-6fe7-4f65-9785-7ad1612424e2","Type":"ContainerDied","Data":"3560a939bc39d9b7be22f1b314a50bad4b48db2b8a7df0a4329fb97dd97ee566"} Feb 24 16:48:05 crc kubenswrapper[4982]: I0224 16:48:05.012579 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29532528-7pg8w" Feb 24 16:48:05 crc kubenswrapper[4982]: I0224 16:48:05.112662 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4wck\" (UniqueName: \"kubernetes.io/projected/359b97d5-6fe7-4f65-9785-7ad1612424e2-kube-api-access-q4wck\") pod \"359b97d5-6fe7-4f65-9785-7ad1612424e2\" (UID: \"359b97d5-6fe7-4f65-9785-7ad1612424e2\") " Feb 24 16:48:05 crc kubenswrapper[4982]: I0224 16:48:05.119345 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359b97d5-6fe7-4f65-9785-7ad1612424e2-kube-api-access-q4wck" (OuterVolumeSpecName: "kube-api-access-q4wck") pod "359b97d5-6fe7-4f65-9785-7ad1612424e2" (UID: "359b97d5-6fe7-4f65-9785-7ad1612424e2"). InnerVolumeSpecName "kube-api-access-q4wck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 16:48:05 crc kubenswrapper[4982]: I0224 16:48:05.215684 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4wck\" (UniqueName: \"kubernetes.io/projected/359b97d5-6fe7-4f65-9785-7ad1612424e2-kube-api-access-q4wck\") on node \"crc\" DevicePath \"\"" Feb 24 16:48:05 crc kubenswrapper[4982]: I0224 16:48:05.439947 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29532528-7pg8w" event={"ID":"359b97d5-6fe7-4f65-9785-7ad1612424e2","Type":"ContainerDied","Data":"8285c0953223da9bcc2816e51348fc09e014cc90049b050bf343b6ad48e23678"} Feb 24 16:48:05 crc kubenswrapper[4982]: I0224 16:48:05.439987 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8285c0953223da9bcc2816e51348fc09e014cc90049b050bf343b6ad48e23678" Feb 24 16:48:05 crc kubenswrapper[4982]: I0224 16:48:05.440343 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29532528-7pg8w" Feb 24 16:48:06 crc kubenswrapper[4982]: I0224 16:48:06.096254 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29532522-2zqnv"] Feb 24 16:48:06 crc kubenswrapper[4982]: I0224 16:48:06.106221 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29532522-2zqnv"] Feb 24 16:48:07 crc kubenswrapper[4982]: I0224 16:48:07.169267 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a902e48-0351-46eb-bc3a-68b0b9e572be" path="/var/lib/kubelet/pods/1a902e48-0351-46eb-bc3a-68b0b9e572be/volumes"